Test Report: Docker_Linux_crio 22094

4d318e45b0dac190a241a23c5ddc63ef7c67bab3:2025-12-10:42711

Test failures (29/415)

Order  Failed test  Duration (s)
29 TestDownloadOnlyKic 0.89
38 TestAddons/serial/Volcano 0.24
44 TestAddons/parallel/Registry 12.52
45 TestAddons/parallel/RegistryCreds 0.4
46 TestAddons/parallel/Ingress 144.47
47 TestAddons/parallel/InspektorGadget 6.24
48 TestAddons/parallel/MetricsServer 5.3
50 TestAddons/parallel/CSI 37.78
51 TestAddons/parallel/Headlamp 2.43
52 TestAddons/parallel/CloudSpanner 5.24
53 TestAddons/parallel/LocalPath 11.11
54 TestAddons/parallel/NvidiaDevicePlugin 5.25
55 TestAddons/parallel/Yakd 5.25
56 TestAddons/parallel/AmdGpuDevicePlugin 5.26
142 TestFunctional/parallel/ImageCommands/ImageListJson 2.26
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 2.85
294 TestJSONOutput/pause/Command 1.65
300 TestJSONOutput/unpause/Command 1.82
399 TestPause/serial/Pause 6.41
442 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.48
456 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.41
460 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.14
466 TestStartStop/group/old-k8s-version/serial/Pause 6.08
472 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.46
475 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.96
486 TestStartStop/group/newest-cni/serial/Pause 5.93
488 TestStartStop/group/no-preload/serial/Pause 6.05
492 TestStartStop/group/embed-certs/serial/Pause 5.96
496 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.85
TestDownloadOnlyKic (0.89s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-192359 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:239: expected tarball file "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4" to exist, but got error: stat /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4: no such file or directory
helpers_test.go:176: Cleaning up "download-docker-192359" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-192359
--- FAIL: TestDownloadOnlyKic (0.89s)
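For local triage, the missing-preload condition can be checked directly against the cache path reported above. A minimal sketch, assuming the same MINIKUBE_HOME layout as the Jenkins job:

    # path taken from the stat error above; adjust to the local .minikube cache if different
    PRELOAD=/home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
    # the test expects this tarball to exist after "start --download-only"; stat confirms whether it was fetched
    stat "$PRELOAD" || echo "preload tarball missing: $PRELOAD"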

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable volcano --alsologtostderr -v=1: exit status 11 (242.543794ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:30:28.776583   20132 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:28.776879   20132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:28.776889   20132 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:28.776893   20132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:28.777505   20132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:28.778090   20132 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:28.778506   20132 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:28.778527   20132 addons.go:622] checking whether the cluster is paused
	I1210 05:30:28.778605   20132 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:28.778617   20132 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:28.778963   20132 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:28.796579   20132 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:28.796625   20132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:28.814723   20132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:28.907006   20132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:28.907115   20132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:28.934373   20132 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:28.934405   20132 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:28.934409   20132 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:28.934412   20132 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:28.934416   20132 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:28.934421   20132 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:28.934425   20132 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:28.934429   20132 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:28.934434   20132 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:28.934447   20132 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:28.934455   20132 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:28.934459   20132 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:28.934463   20132 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:28.934466   20132 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:28.934469   20132 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:28.934474   20132 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:28.934479   20132 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:28.934483   20132 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:28.934486   20132 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:28.934488   20132 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:28.934492   20132 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:28.934494   20132 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:28.934497   20132 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:28.934500   20132 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:28.934502   20132 cri.go:89] found id: ""
	I1210 05:30:28.934546   20132 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:28.948314   20132 out.go:203] 
	W1210 05:30:28.949950   20132 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:28.949971   20132 out.go:285] * 
	* 
	W1210 05:30:28.952962   20132 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:28.954157   20132 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
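This exit status 11 is not specific to Volcano: the same paused-state check fails in the parallel Registry and RegistryCreds tests below. The disable path first checks whether the cluster is paused by running "sudo runc list -f json" inside the node, and on this cri-o node /run/runc does not exist, so the check errors before any addon is actually disabled. A minimal reproduction sketch, assuming the addons-193927 profile is still running:

    # re-run the two commands the addon-disable paused check executes inside the node
    out/minikube-linux-amd64 -p addons-193927 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-amd64 -p addons-193927 ssh "sudo runc list -f json"
    # on this run the second command exits 1 with: open /run/runc: no such file or directory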

TestAddons/parallel/Registry (12.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.349429ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002699897s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002753371s
addons_test.go:394: (dbg) Run:  kubectl --context addons-193927 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-193927 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-193927 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.079936713s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 ip
2025/12/10 05:30:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable registry --alsologtostderr -v=1: exit status 11 (236.668735ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:30:50.054898   21988 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:50.055185   21988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:50.055195   21988 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:50.055199   21988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:50.055377   21988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:50.055606   21988 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:50.055892   21988 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:50.055910   21988 addons.go:622] checking whether the cluster is paused
	I1210 05:30:50.055983   21988 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:50.055995   21988 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:50.056374   21988 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:50.073556   21988 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:50.073604   21988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:50.090419   21988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:50.188481   21988 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:50.188554   21988 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:50.215735   21988 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:50.215760   21988 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:50.215765   21988 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:50.215769   21988 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:50.215772   21988 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:50.215775   21988 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:50.215778   21988 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:50.215781   21988 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:50.215784   21988 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:50.215796   21988 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:50.215804   21988 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:50.215808   21988 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:50.215816   21988 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:50.215821   21988 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:50.215828   21988 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:50.215839   21988 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:50.215846   21988 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:50.215852   21988 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:50.215857   21988 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:50.215861   21988 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:50.215864   21988 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:50.215867   21988 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:50.215870   21988 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:50.215873   21988 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:50.215878   21988 cri.go:89] found id: ""
	I1210 05:30:50.215931   21988 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:50.229146   21988 out.go:203] 
	W1210 05:30:50.230384   21988 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:50.230403   21988 out.go:285] * 
	* 
	W1210 05:30:50.233288   21988 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:50.234438   21988 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.52s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.399966ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-193927
addons_test.go:334: (dbg) Run:  kubectl --context addons-193927 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (232.543355ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:30:56.777881   23400 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:56.778022   23400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:56.778032   23400 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:56.778036   23400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:56.778221   23400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:56.778469   23400 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:56.778746   23400 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:56.778764   23400 addons.go:622] checking whether the cluster is paused
	I1210 05:30:56.778848   23400 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:56.778861   23400 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:56.779236   23400 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:56.796423   23400 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:56.796468   23400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:56.812727   23400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:56.907343   23400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:56.907446   23400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:56.935861   23400 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:56.935896   23400 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:56.935901   23400 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:56.935904   23400 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:56.935906   23400 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:56.935910   23400 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:56.935913   23400 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:56.935915   23400 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:56.935918   23400 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:56.935927   23400 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:56.935930   23400 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:56.935933   23400 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:56.935935   23400 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:56.935938   23400 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:56.935941   23400 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:56.935948   23400 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:56.935953   23400 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:56.935957   23400 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:56.935960   23400 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:56.935963   23400 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:56.935967   23400 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:56.935970   23400 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:56.935972   23400 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:56.935974   23400 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:56.935977   23400 cri.go:89] found id: ""
	I1210 05:30:56.936012   23400 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:56.948978   23400 out.go:203] 
	W1210 05:30:56.950062   23400 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:56.950103   23400 out.go:285] * 
	* 
	W1210 05:30:56.952976   23400 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:56.954065   23400 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (144.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-193927 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-193927 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-193927 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [2f4a165f-cf72-415b-b5cf-1f9a898dfd94] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [2f4a165f-cf72-415b-b5cf-1f9a898dfd94] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003053632s
I1210 05:30:58.665790    9253 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.085014966s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-193927 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
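The exit status 28 from the in-node curl above is most likely curl's own timeout code (CURLE_OPERATION_TIMEDOUT), i.e. the request to 127.0.0.1 never got a response within the test's window. A hedged triage sketch, assuming the profile and the test's nginx/ingress objects are still present:

    # repeat the in-node request with an explicit bound and verbose output to see where it stalls
    out/minikube-linux-amd64 -p addons-193927 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # inspect the ingress controller and the test's nginx resources directly
    kubectl --context addons-193927 -n ingress-nginx get pods
    kubectl --context addons-193927 get ingress,svc,pods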
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-193927
helpers_test.go:244: (dbg) docker inspect addons-193927:

-- stdout --
	[
	    {
	        "Id": "d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d",
	        "Created": "2025-12-10T05:29:04.370422332Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:29:04.401985856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/hosts",
	        "LogPath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d-json.log",
	        "Name": "/addons-193927",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-193927:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-193927",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d",
	                "LowerDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-193927",
	                "Source": "/var/lib/docker/volumes/addons-193927/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-193927",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-193927",
	                "name.minikube.sigs.k8s.io": "addons-193927",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "85703a43c0c5a932407537da90729dd6048aa9a745c1e0574e64f661747b9863",
	            "SandboxKey": "/var/run/docker/netns/85703a43c0c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-193927": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d72278174f2f83a56b57bff0dfa7876641b8e88aefe937e9b34b3af1750bdc5d",
	                    "EndpointID": "209ba600f17d85ad1770fffe769fe7b8c00c26e435203a836cf6af1fc41934d1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "e2:0c:b9:b3:f5:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-193927",
	                        "d9822419bc11"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-193927 -n addons-193927
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-193927 logs -n 25: (1.092341935s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-899655 --alsologtostderr --binary-mirror http://127.0.0.1:36067 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-899655 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ -p binary-mirror-899655                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-899655 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ addons  │ disable dashboard -p addons-193927                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-193927                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ start   │ -p addons-193927 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:30 UTC │
	│ addons  │ addons-193927 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ enable headlamp -p addons-193927 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ ip      │ addons-193927 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │ 10 Dec 25 05:30 UTC │
	│ addons  │ addons-193927 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ ssh     │ addons-193927 ssh cat /opt/local-path-provisioner/pvc-474dcd7d-97ea-4af6-9477-35c3227c923f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │ 10 Dec 25 05:30 UTC │
	│ addons  │ addons-193927 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-193927                                                                                                                                                                                                                                                                                                                                                                                           │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │ 10 Dec 25 05:30 UTC │
	│ addons  │ addons-193927 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ ssh     │ addons-193927 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:31 UTC │                     │
	│ addons  │ addons-193927 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:31 UTC │                     │
	│ ip      │ addons-193927 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-193927        │ jenkins │ v1.37.0 │ 10 Dec 25 05:33 UTC │ 10 Dec 25 05:33 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:46.275231   11129 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:46.275342   11129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:46.275351   11129 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:46.275354   11129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:46.275533   11129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:28:46.276036   11129 out.go:368] Setting JSON to false
	I1210 05:28:46.276798   11129 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":670,"bootTime":1765343856,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:46.276879   11129 start.go:143] virtualization: kvm guest
	I1210 05:28:46.278506   11129 out.go:179] * [addons-193927] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:46.279476   11129 notify.go:221] Checking for updates...
	I1210 05:28:46.279489   11129 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:28:46.280427   11129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:46.281443   11129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:28:46.282478   11129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:28:46.283379   11129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:28:46.284217   11129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:28:46.285244   11129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:46.308689   11129 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:28:46.308781   11129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:46.362767   11129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:28:46.353762855 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:46.362878   11129 docker.go:319] overlay module found
	I1210 05:28:46.364328   11129 out.go:179] * Using the docker driver based on user configuration
	I1210 05:28:46.365291   11129 start.go:309] selected driver: docker
	I1210 05:28:46.365305   11129 start.go:927] validating driver "docker" against <nil>
	I1210 05:28:46.365315   11129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:28:46.365837   11129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:46.417328   11129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:28:46.407268077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:46.417526   11129 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:46.417752   11129 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:28:46.419169   11129 out.go:179] * Using Docker driver with root privileges
	I1210 05:28:46.420109   11129 cni.go:84] Creating CNI manager for ""
	I1210 05:28:46.420178   11129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:28:46.420192   11129 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:46.420270   11129 start.go:353] cluster config:
	{Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1210 05:28:46.421319   11129 out.go:179] * Starting "addons-193927" primary control-plane node in "addons-193927" cluster
	I1210 05:28:46.422150   11129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 05:28:46.423103   11129 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:28:46.423951   11129 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 05:28:46.423981   11129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:28:46.439627   11129 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:28:46.439725   11129 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 05:28:46.439753   11129 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 05:28:46.439762   11129 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 05:28:46.439772   11129 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 05:28:46.439782   11129 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	W1210 05:28:46.448206   11129 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 05:28:46.533008   11129 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 05:28:46.533271   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:46.533398   11129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/config.json ...
	I1210 05:28:46.533426   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/config.json: {Name:mk15220b80d6396ef85d3cd2c5fbeb1c706f7513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:28:46.662887   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:46.791462   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
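
For anyone re-checking the preload fallback recorded in the 05:28:46 entries above by hand: the two tarball URLs and the kubeadm URL below are copied verbatim from those entries, while the loop and the curl flags are only an illustrative sketch, not anything minikube itself runs.

    #!/usr/bin/env bash
    # Re-check the preload URLs that returned 404 above, plus the kubeadm URL
    # the start then falls back to. curl flags here are illustrative.
    urls=(
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4"
      "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4"
      "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm"
    )
    for url in "${urls[@]}"; do
      # Print the final HTTP status code next to each URL.
      printf '%s  %s\n' "$(curl -sIL -o /dev/null -w '%{http_code}' "$url")" "$url"
    done
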
	I1210 05:28:46.928008   11129 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928034   11129 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928040   11129 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928045   11129 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.927996   11129 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928004   11129 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928134   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 05:28:46.928153   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 05:28:46.928165   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 05:28:46.928168   11129 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 166.131µs
	I1210 05:28:46.928182   11129 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 05:28:46.928139   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:28:46.928183   11129 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 143.143µs
	I1210 05:28:46.928195   11129 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 05:28:46.928195   11129 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 211.169µs
	I1210 05:28:46.928162   11129 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 131.166µs
	I1210 05:28:46.928203   11129 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:28:46.928205   11129 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 05:28:46.928139   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 05:28:46.928216   11129 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 173.939µs
	I1210 05:28:46.928228   11129 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 05:28:46.928188   11129 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928207   11129 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928261   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 05:28:46.928282   11129 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 291.88µs
	I1210 05:28:46.928297   11129 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 05:28:46.928310   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 05:28:46.928333   11129 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 223.365µs
	I1210 05:28:46.928350   11129 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 05:28:46.928317   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:28:46.928369   11129 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 229.006µs
	I1210 05:28:46.928379   11129 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:28:46.928386   11129 cache.go:87] Successfully saved all images to host disk.
	I1210 05:29:00.009752   11129 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 05:29:00.009791   11129 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:29:00.009842   11129 start.go:360] acquireMachinesLock for addons-193927: {Name:mk44c4bc22782f28a1ec2fd1a231e15d9422e280 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:00.009950   11129 start.go:364] duration metric: took 86.083µs to acquireMachinesLock for "addons-193927"
	I1210 05:29:00.009981   11129 start.go:93] Provisioning new machine with config: &{Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:29:00.010055   11129 start.go:125] createHost starting for "" (driver="docker")
	I1210 05:29:00.161578   11129 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 05:29:00.161863   11129 start.go:159] libmachine.API.Create for "addons-193927" (driver="docker")
	I1210 05:29:00.161892   11129 client.go:173] LocalClient.Create starting
	I1210 05:29:00.162031   11129 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 05:29:00.225801   11129 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 05:29:00.288616   11129 cli_runner.go:164] Run: docker network inspect addons-193927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 05:29:00.305731   11129 cli_runner.go:211] docker network inspect addons-193927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 05:29:00.305797   11129 network_create.go:284] running [docker network inspect addons-193927] to gather additional debugging logs...
	I1210 05:29:00.305816   11129 cli_runner.go:164] Run: docker network inspect addons-193927
	W1210 05:29:00.321253   11129 cli_runner.go:211] docker network inspect addons-193927 returned with exit code 1
	I1210 05:29:00.321278   11129 network_create.go:287] error running [docker network inspect addons-193927]: docker network inspect addons-193927: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-193927 not found
	I1210 05:29:00.321290   11129 network_create.go:289] output of [docker network inspect addons-193927]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-193927 not found
	
	** /stderr **
	I1210 05:29:00.321392   11129 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:29:00.337561   11129 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb2a80}
	I1210 05:29:00.337602   11129 network_create.go:124] attempt to create docker network addons-193927 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 05:29:00.337657   11129 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-193927 addons-193927
	I1210 05:29:00.628488   11129 network_create.go:108] docker network addons-193927 192.168.49.0/24 created
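
The network step just logged can be reproduced outside of minikube with the same docker CLI call recorded at 05:29:00.337657; the flags and the addons-193927 name are copied from that entry, and the simplified inspect format at the end is my own shorthand rather than the Go template the log uses.

    # Same network-create call the log records for the cluster network.
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=addons-193927 \
      addons-193927

    # Quick check of the resulting subnet and gateway.
    docker network inspect addons-193927 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
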
	I1210 05:29:00.628519   11129 kic.go:121] calculated static IP "192.168.49.2" for the "addons-193927" container
	I1210 05:29:00.628574   11129 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 05:29:00.643678   11129 cli_runner.go:164] Run: docker volume create addons-193927 --label name.minikube.sigs.k8s.io=addons-193927 --label created_by.minikube.sigs.k8s.io=true
	I1210 05:29:00.698887   11129 oci.go:103] Successfully created a docker volume addons-193927
	I1210 05:29:00.698962   11129 cli_runner.go:164] Run: docker run --rm --name addons-193927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193927 --entrypoint /usr/bin/test -v addons-193927:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 05:29:04.298271   11129 cli_runner.go:217] Completed: docker run --rm --name addons-193927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193927 --entrypoint /usr/bin/test -v addons-193927:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (3.599273799s)
	I1210 05:29:04.298306   11129 oci.go:107] Successfully prepared a docker volume addons-193927
	I1210 05:29:04.298353   11129 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 05:29:04.298430   11129 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 05:29:04.298461   11129 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 05:29:04.298500   11129 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 05:29:04.353897   11129 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-193927 --name addons-193927 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193927 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-193927 --network addons-193927 --ip 192.168.49.2 --volume addons-193927:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 05:29:04.623551   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Running}}
	I1210 05:29:04.642270   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:04.660724   11129 cli_runner.go:164] Run: docker exec addons-193927 stat /var/lib/dpkg/alternatives/iptables
	I1210 05:29:04.705803   11129 oci.go:144] the created container "addons-193927" has a running status.
	I1210 05:29:04.705835   11129 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa...
	I1210 05:29:04.744406   11129 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 05:29:04.771674   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:04.788869   11129 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 05:29:04.788887   11129 kic_runner.go:114] Args: [docker exec --privileged addons-193927 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 05:29:04.826184   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:04.843653   11129 machine.go:94] provisionDockerMachine start ...
	I1210 05:29:04.843756   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:04.863159   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:04.863505   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:04.863525   11129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:29:04.864985   11129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54540->127.0.0.1:32768: read: connection reset by peer
	I1210 05:29:07.994255   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-193927
	
	I1210 05:29:07.994284   11129 ubuntu.go:182] provisioning hostname "addons-193927"
	I1210 05:29:07.994353   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.010506   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:08.010699   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:08.010711   11129 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-193927 && echo "addons-193927" | sudo tee /etc/hostname
	I1210 05:29:08.146563   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-193927
	
	I1210 05:29:08.146635   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.162745   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:08.162945   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:08.162960   11129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-193927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-193927/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-193927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:29:08.289777   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:29:08.289800   11129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 05:29:08.289820   11129 ubuntu.go:190] setting up certificates
	I1210 05:29:08.289831   11129 provision.go:84] configureAuth start
	I1210 05:29:08.289876   11129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193927
	I1210 05:29:08.305988   11129 provision.go:143] copyHostCerts
	I1210 05:29:08.306056   11129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 05:29:08.306201   11129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 05:29:08.306277   11129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 05:29:08.306339   11129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.addons-193927 san=[127.0.0.1 192.168.49.2 addons-193927 localhost minikube]
	I1210 05:29:08.549873   11129 provision.go:177] copyRemoteCerts
	I1210 05:29:08.549921   11129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:29:08.549955   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.566278   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:08.659091   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:29:08.676250   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:29:08.691401   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:29:08.706527   11129 provision.go:87] duration metric: took 416.685244ms to configureAuth
	I1210 05:29:08.706547   11129 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:29:08.706690   11129 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:08.706772   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.723018   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:08.723244   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:08.723260   11129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:29:08.980428   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:29:08.980457   11129 machine.go:97] duration metric: took 4.136780163s to provisionDockerMachine
	I1210 05:29:08.980471   11129 client.go:176] duration metric: took 8.81857247s to LocalClient.Create
	I1210 05:29:08.980501   11129 start.go:167] duration metric: took 8.818636186s to libmachine.API.Create "addons-193927"
	I1210 05:29:08.980512   11129 start.go:293] postStartSetup for "addons-193927" (driver="docker")
	I1210 05:29:08.980530   11129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:29:08.980620   11129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:29:08.980669   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.997522   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.091671   11129 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:29:09.094738   11129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:29:09.094760   11129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:29:09.094769   11129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 05:29:09.094827   11129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 05:29:09.094854   11129 start.go:296] duration metric: took 114.330928ms for postStartSetup
	I1210 05:29:09.095156   11129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193927
	I1210 05:29:09.111186   11129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/config.json ...
	I1210 05:29:09.111422   11129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:29:09.111464   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:09.126832   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.217267   11129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:29:09.221244   11129 start.go:128] duration metric: took 9.211177559s to createHost
	I1210 05:29:09.221261   11129 start.go:83] releasing machines lock for "addons-193927", held for 9.211297358s
	I1210 05:29:09.221319   11129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193927
	I1210 05:29:09.237847   11129 ssh_runner.go:195] Run: cat /version.json
	I1210 05:29:09.237885   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:09.237944   11129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:29:09.238021   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:09.254879   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.255073   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.399543   11129 ssh_runner.go:195] Run: systemctl --version
	I1210 05:29:09.405292   11129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:29:09.434774   11129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:29:09.438732   11129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:29:09.438781   11129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:29:09.461898   11129 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:29:09.461912   11129 start.go:496] detecting cgroup driver to use...
	I1210 05:29:09.461936   11129 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 05:29:09.461967   11129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:29:09.476242   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:29:09.486646   11129 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:29:09.486688   11129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:29:09.500959   11129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:29:09.515972   11129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:29:09.592464   11129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:29:09.673555   11129 docker.go:234] disabling docker service ...
	I1210 05:29:09.673608   11129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:29:09.689774   11129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:29:09.700819   11129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:29:09.778913   11129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:29:09.854864   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:29:09.865758   11129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:29:09.878150   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.003553   11129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:29:10.003613   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.013761   11129 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 05:29:10.013816   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.022038   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.030054   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.037707   11129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:29:10.044862   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.052412   11129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.064248   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.072049   11129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:29:10.078307   11129 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:29:10.078342   11129 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:29:10.089168   11129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:29:10.095636   11129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:10.171488   11129 ssh_runner.go:195] Run: sudo systemctl restart crio
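
Gathered from the entries between 05:29:10.003613 and the crio restart just above, the CRI-O reconfiguration amounts to a few sed edits on /etc/crio/crio.conf.d/02-crio.conf plus a sysctl tweak; the grouping into one script is mine, but each command is taken from the log.

    #!/usr/bin/env bash
    # CRI-O edits as applied in the log, gathered into one script (run as root on the node).
    set -euo pipefail
    conf=/etc/crio/crio.conf.d/02-crio.conf

    # Pause image and systemd cgroup driver.
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
    sed -i '/conmon_cgroup = .*/d' "$conf"
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

    # Allow unprivileged low ports via default_sysctls.
    sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
    grep -q '^ *default_sysctls' "$conf" || \
      sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

    # IP forwarding, then restart CRI-O.
    echo 1 > /proc/sys/net/ipv4/ip_forward
    systemctl daemon-reload
    systemctl restart crio
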
	I1210 05:29:10.294427   11129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:29:10.294503   11129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:29:10.297976   11129 start.go:564] Will wait 60s for crictl version
	I1210 05:29:10.298027   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:10.301253   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:29:10.324261   11129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 05:29:10.324360   11129 ssh_runner.go:195] Run: crio --version
	I1210 05:29:10.350430   11129 ssh_runner.go:195] Run: crio --version
	I1210 05:29:10.378598   11129 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 05:29:10.379733   11129 cli_runner.go:164] Run: docker network inspect addons-193927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:29:10.395487   11129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:29:10.399157   11129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:10.408552   11129 kubeadm.go:884] updating cluster {Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:29:10.408714   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.536279   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.659025   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.789208   11129 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 05:29:10.789269   11129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:29:10.811164   11129 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 05:29:10.811187   11129 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 05:29:10.811293   11129 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:10.811313   11129 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:10.811328   11129 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:29:10.811337   11129 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:10.811360   11129 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:10.811259   11129 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:10.811249   11129 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:10.811271   11129 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:10.812455   11129 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:10.812455   11129 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:10.812496   11129 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:29:10.812529   11129 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:10.812459   11129 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:10.812456   11129 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:10.812465   11129 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:10.812859   11129 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:10.967332   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:10.974231   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:10.977609   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:10.981393   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:10.991101   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.000681   11129 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 05:29:11.000720   11129 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.000761   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.002070   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.009455   11129 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 05:29:11.009501   11129 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.009550   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.011013   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 05:29:11.018311   11129 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 05:29:11.018361   11129 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.018422   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.019722   11129 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 05:29:11.019768   11129 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.019825   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.031449   11129 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 05:29:11.031474   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.031489   11129 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.031533   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.039342   11129 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 05:29:11.039377   11129 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.039376   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.039417   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.046905   11129 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 05:29:11.046927   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.046942   11129 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 05:29:11.046977   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.046980   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.060502   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.060520   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.067968   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.068004   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.075238   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.075347   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:11.077387   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.096748   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.096748   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.100069   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.100100   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.111659   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:11.111763   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.113783   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.131445   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.131466   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 05:29:11.131554   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:11.134781   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.134843   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 05:29:11.134920   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:11.141276   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:11.147599   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 05:29:11.147614   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 05:29:11.147692   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:11.147706   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:11.160289   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 05:29:11.160347   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 05:29:11.160377   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 05:29:11.160391   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:11.171550   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 05:29:11.171586   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 05:29:11.171671   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:11.171585   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 05:29:11.176025   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 05:29:11.176053   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 05:29:11.176147   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 05:29:11.176178   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 05:29:11.176183   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 05:29:11.176158   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 05:29:11.176207   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 05:29:11.176259   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:11.223615   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 05:29:11.223640   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 05:29:11.225866   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 05:29:11.225891   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 05:29:11.228053   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:11.286037   11129 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:11.286113   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:11.312547   11129 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 05:29:11.312592   11129 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:11.312642   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.669880   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 05:29:11.669918   11129 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:11.669955   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:11.669966   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:12.884833   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.214855855s)
	I1210 05:29:12.884857   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 05:29:12.884876   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:12.884878   11129 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.214889793s)
	I1210 05:29:12.884910   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:12.884945   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:13.874231   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 05:29:13.874238   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:13.874268   11129 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:13.874303   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:13.901143   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 05:29:13.901236   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:15.073710   11129 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.172450757s)
	I1210 05:29:15.073744   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 05:29:15.073762   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 05:29:15.073712   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.199376609s)
	I1210 05:29:15.073839   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 05:29:15.073869   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:15.073923   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:16.369016   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (1.295070644s)
	I1210 05:29:16.369040   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 05:29:16.369060   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:16.369133   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.459526   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.090365034s)
	I1210 05:29:17.459558   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 05:29:17.459587   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:17.459631   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:18.475563   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.015910767s)
	I1210 05:29:18.475588   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 05:29:18.475615   11129 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:18.475657   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:18.974434   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 05:29:18.974481   11129 cache_images.go:125] Successfully loaded all cached images
	I1210 05:29:18.974488   11129 cache_images.go:94] duration metric: took 8.163287278s to LoadCachedImages
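The block above shows the fallback path minikube takes when no preload tarball exists: for each required image it runs a stat existence check on the node, scp's the cached archive over when the stat fails, and then loads it into CRI-O's storage with "sudo podman load -i". A hedged sketch of that check/copy/load loop follows; paths and the image list are illustrative, it runs locally rather than over SSH, and it is not the real cache_images.go logic:

// loadimages.go - illustrative sketch of the stat -> copy -> "podman load"
// sequence seen in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	cacheDir := "/tmp/image-cache"        // hypothetical local cache directory
	destDir := "/var/lib/minikube/images" // staging directory used in the log
	archives := []string{"pause_3.10.1", "coredns_v1.12.1"}

	for _, name := range archives {
		dest := filepath.Join(destDir, name)
		// Existence check, mirroring: stat -c "%s %y" <dest>
		if _, err := os.Stat(dest); err != nil {
			// Missing on the node: copy it from the cache (scp in the real flow).
			if out, err := exec.Command("sudo", "cp", filepath.Join(cacheDir, name), dest).CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "copy %s: %v\n%s", name, err, out)
				os.Exit(1)
			}
		}
		// Load the archive into the container runtime's image storage.
		if out, err := exec.Command("sudo", "podman", "load", "-i", dest).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "load %s: %v\n%s", name, err, out)
			os.Exit(1)
		}
		fmt.Println("loaded", name)
	}
}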
	I1210 05:29:18.974503   11129 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1210 05:29:18.974592   11129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-193927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:29:18.974698   11129 ssh_runner.go:195] Run: crio config
	I1210 05:29:19.018535   11129 cni.go:84] Creating CNI manager for ""
	I1210 05:29:19.018554   11129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:29:19.018571   11129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:29:19.018595   11129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-193927 NodeName:addons-193927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:29:19.018706   11129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-193927"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
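The generated /var/tmp/minikube/kubeadm.yaml shown above is a single file holding several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A minimal, standard-library-only sketch of splitting such a combined file and listing each document's kind (the path is the one from the log; everything else is illustrative):

// kinds.go - illustrative: print the "kind:" of each YAML document in a
// combined kubeadm config like the one rendered above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// kubeadm accepts several documents in one file, separated by "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(none)"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}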
	I1210 05:29:19.018768   11129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:19.026566   11129 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 05:29:19.026618   11129 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:19.034020   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 05:29:19.034050   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 05:29:19.034046   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:19.034115   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 05:29:19.034124   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:19.034167   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 05:29:19.037710   11129 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 05:29:19.037733   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 05:29:19.038298   11129 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 05:29:19.038321   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 05:29:19.053892   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 05:29:19.090636   11129 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 05:29:19.090671   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 05:29:19.505163   11129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:29:19.512360   11129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 05:29:19.523634   11129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:29:19.537516   11129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1210 05:29:19.548610   11129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:29:19.551617   11129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:19.560340   11129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:19.635784   11129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:19.659025   11129 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927 for IP: 192.168.49.2
	I1210 05:29:19.659044   11129 certs.go:195] generating shared ca certs ...
	I1210 05:29:19.659063   11129 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.659244   11129 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 05:29:19.793147   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt ...
	I1210 05:29:19.793181   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt: {Name:mkc0d0f92e95d60b30ec1dbf56195b2dda84cffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.793350   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key ...
	I1210 05:29:19.793366   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key: {Name:mkcb24c7e12076b8d17133f829204e050e518554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.793470   11129 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 05:29:19.822657   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt ...
	I1210 05:29:19.822678   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt: {Name:mkc9fb3c2bc5b72aa1ea9c45f23c0f33021a2b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.822824   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key ...
	I1210 05:29:19.822838   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key: {Name:mkb04095fb63c55d15225717a2eee3c7c5e76061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.822927   11129 certs.go:257] generating profile certs ...
	I1210 05:29:19.822997   11129 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.key
	I1210 05:29:19.823015   11129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt with IP's: []
	I1210 05:29:19.867323   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt ...
	I1210 05:29:19.867341   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: {Name:mk01319d2752e614055082ddab1c9e855df1f14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.867470   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.key ...
	I1210 05:29:19.867483   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.key: {Name:mk35a3912ad5f367c88e2a7048f8fec25874ffac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.867574   11129 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e
	I1210 05:29:19.867596   11129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 05:29:19.975099   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e ...
	I1210 05:29:19.975122   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e: {Name:mkbffd692b7f8649db24e2e6cd07451c5634743b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.975243   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e ...
	I1210 05:29:19.975256   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e: {Name:mk0f10262d3b363611bb28322382111b525ec8f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.975321   11129 certs.go:382] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt
	I1210 05:29:19.975391   11129 certs.go:386] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key
	I1210 05:29:19.975437   11129 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key
	I1210 05:29:19.975453   11129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt with IP's: []
	I1210 05:29:20.177289   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt ...
	I1210 05:29:20.177312   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt: {Name:mk3b440df824a859a3d6377a95acc4bb2c2ea5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:20.177474   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key ...
	I1210 05:29:20.177485   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key: {Name:mkf17594af389a3170388dd608c101b0e689cce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:20.177664   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:29:20.177705   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:29:20.177737   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:29:20.177761   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 05:29:20.178355   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:29:20.194906   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:29:20.210550   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:29:20.225947   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:29:20.241362   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:29:20.256890   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:29:20.272184   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:29:20.287280   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:29:20.302740   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:29:20.319943   11129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:29:20.331150   11129 ssh_runner.go:195] Run: openssl version
	I1210 05:29:20.336590   11129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.342971   11129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:29:20.351722   11129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.354960   11129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.355003   11129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.389415   11129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:29:20.397123   11129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
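The openssl and ln steps above install the minikube CA into the system trust store using OpenSSL's hashed-symlink convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash (b5213941 here), and the cert is linked as "<hash>.0" under /etc/ssl/certs so TLS clients can locate it by hash. A hedged sketch of the same two steps (paths taken from the log, error handling trimmed; not minikube's code):

// trustca.go - illustrative: replicate the subject-hash symlink steps from
// the log (openssl x509 -hash -noout, then ln -fs <cert> /etc/ssl/certs/<hash>.0).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Ask openssl for the subject-name hash the symlink convention is built on.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Same effect as the "sudo ln -fs" in the log; needs root to write /etc/ssl/certs.
	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}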
	I1210 05:29:20.405555   11129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:29:20.408849   11129 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:29:20.408899   11129 kubeadm.go:401] StartCluster: {Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:29:20.408969   11129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:29:20.409033   11129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:29:20.432690   11129 cri.go:89] found id: ""
	I1210 05:29:20.432737   11129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:29:20.439620   11129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:29:20.446705   11129 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:29:20.446742   11129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:29:20.453499   11129 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:29:20.453520   11129 kubeadm.go:158] found existing configuration files:
	
	I1210 05:29:20.453548   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:29:20.460201   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:29:20.460263   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:29:20.466822   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:29:20.473463   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:29:20.473494   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:29:20.479959   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:29:20.486644   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:29:20.486687   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:29:20.493040   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:29:20.499631   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:29:20.499663   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:29:20.506122   11129 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:29:20.558111   11129 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 05:29:20.610165   11129 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:29:30.249205   11129 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 05:29:30.249298   11129 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:29:30.249409   11129 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:29:30.249478   11129 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 05:29:30.249520   11129 kubeadm.go:319] OS: Linux
	I1210 05:29:30.249565   11129 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:29:30.249607   11129 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:29:30.249674   11129 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:29:30.249752   11129 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:29:30.249822   11129 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:29:30.249890   11129 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:29:30.249957   11129 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:29:30.250018   11129 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 05:29:30.250142   11129 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:29:30.250278   11129 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:29:30.250404   11129 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:29:30.250493   11129 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:29:30.251846   11129 out.go:252]   - Generating certificates and keys ...
	I1210 05:29:30.251907   11129 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:29:30.251982   11129 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:29:30.252054   11129 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:29:30.252141   11129 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:29:30.252206   11129 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:29:30.252251   11129 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:29:30.252307   11129 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:29:30.252421   11129 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-193927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:29:30.252467   11129 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:29:30.252568   11129 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-193927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:29:30.252630   11129 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:29:30.252695   11129 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:29:30.252744   11129 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:29:30.252788   11129 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:29:30.252833   11129 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:29:30.252880   11129 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:29:30.252922   11129 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:29:30.252977   11129 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:29:30.253065   11129 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:29:30.253194   11129 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:29:30.253278   11129 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:29:30.255254   11129 out.go:252]   - Booting up control plane ...
	I1210 05:29:30.255335   11129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:29:30.255400   11129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:29:30.255463   11129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:29:30.255548   11129 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:29:30.255629   11129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:29:30.255715   11129 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:29:30.255796   11129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:29:30.255838   11129 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:29:30.255958   11129 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:29:30.256095   11129 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:29:30.256185   11129 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000744816s
	I1210 05:29:30.256264   11129 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:29:30.256365   11129 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 05:29:30.256449   11129 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:29:30.256522   11129 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:29:30.256591   11129 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004261137s
	I1210 05:29:30.256644   11129 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.698527335s
	I1210 05:29:30.256701   11129 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501267806s
	I1210 05:29:30.256809   11129 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:29:30.256935   11129 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:29:30.256989   11129 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:29:30.257165   11129 kubeadm.go:319] [mark-control-plane] Marking the node addons-193927 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:29:30.257215   11129 kubeadm.go:319] [bootstrap-token] Using token: tjsxdu.6ugkds5uf0q4rr7i
	I1210 05:29:30.258289   11129 out.go:252]   - Configuring RBAC rules ...
	I1210 05:29:30.258386   11129 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:29:30.258472   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:29:30.258583   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:29:30.258683   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:29:30.258777   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:29:30.258845   11129 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:29:30.258954   11129 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:29:30.259008   11129 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:29:30.259074   11129 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:29:30.259104   11129 kubeadm.go:319] 
	I1210 05:29:30.259189   11129 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:29:30.259203   11129 kubeadm.go:319] 
	I1210 05:29:30.259315   11129 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:29:30.259324   11129 kubeadm.go:319] 
	I1210 05:29:30.259359   11129 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:29:30.259443   11129 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:29:30.259524   11129 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:29:30.259532   11129 kubeadm.go:319] 
	I1210 05:29:30.259602   11129 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:29:30.259611   11129 kubeadm.go:319] 
	I1210 05:29:30.259678   11129 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:29:30.259687   11129 kubeadm.go:319] 
	I1210 05:29:30.259740   11129 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:29:30.259807   11129 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:29:30.259871   11129 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:29:30.259877   11129 kubeadm.go:319] 
	I1210 05:29:30.259941   11129 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:29:30.260006   11129 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:29:30.260012   11129 kubeadm.go:319] 
	I1210 05:29:30.260093   11129 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tjsxdu.6ugkds5uf0q4rr7i \
	I1210 05:29:30.260229   11129 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc \
	I1210 05:29:30.260276   11129 kubeadm.go:319] 	--control-plane 
	I1210 05:29:30.260290   11129 kubeadm.go:319] 
	I1210 05:29:30.260403   11129 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:29:30.260411   11129 kubeadm.go:319] 
	I1210 05:29:30.260479   11129 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tjsxdu.6ugkds5uf0q4rr7i \
	I1210 05:29:30.260580   11129 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc 
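The kubeadm init output captured above doubles as a how-to: the kubeconfig setup and join commands it prints can be run by hand inside the node. A minimal sketch, reusing the token and CA hash shown in this log (they are only valid for this test cluster, and this suite actually runs a single node, so the join is hypothetical):

	# as a regular user on the control-plane node
	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

	# joining an additional worker node (illustrative only)
	kubeadm join control-plane.minikube.internal:8443 \
	  --token tjsxdu.6ugkds5uf0q4rr7i \
	  --discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc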
	I1210 05:29:30.260590   11129 cni.go:84] Creating CNI manager for ""
	I1210 05:29:30.260596   11129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:29:30.261764   11129 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 05:29:30.263000   11129 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 05:29:30.266985   11129 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 05:29:30.267001   11129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 05:29:30.278871   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 05:29:30.468975   11129 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:29:30.469093   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:30.469095   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-193927 minikube.k8s.io/updated_at=2025_12_10T05_29_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=addons-193927 minikube.k8s.io/primary=true
	I1210 05:29:30.478546   11129 ops.go:34] apiserver oom_adj: -16
	I1210 05:29:30.545908   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:31.046178   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:31.546421   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:32.046819   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:32.546884   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:33.046185   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:33.546485   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:34.046321   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:34.545952   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:34.606669   11129 kubeadm.go:1114] duration metric: took 4.137641273s to wait for elevateKubeSystemPrivileges
	I1210 05:29:34.606706   11129 kubeadm.go:403] duration metric: took 14.197810043s to StartCluster
	I1210 05:29:34.606726   11129 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:34.606842   11129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:29:34.607233   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:34.607431   11129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:29:34.607451   11129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:29:34.607512   11129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 05:29:34.607651   11129 addons.go:70] Setting yakd=true in profile "addons-193927"
	I1210 05:29:34.607668   11129 addons.go:70] Setting ingress-dns=true in profile "addons-193927"
	I1210 05:29:34.607687   11129 addons.go:70] Setting inspektor-gadget=true in profile "addons-193927"
	I1210 05:29:34.607683   11129 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-193927"
	I1210 05:29:34.607678   11129 addons.go:239] Setting addon yakd=true in "addons-193927"
	I1210 05:29:34.607705   11129 addons.go:70] Setting metrics-server=true in profile "addons-193927"
	I1210 05:29:34.607701   11129 addons.go:70] Setting gcp-auth=true in profile "addons-193927"
	I1210 05:29:34.607718   11129 addons.go:239] Setting addon metrics-server=true in "addons-193927"
	I1210 05:29:34.607726   11129 mustload.go:66] Loading cluster: addons-193927
	I1210 05:29:34.607743   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607747   11129 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-193927"
	I1210 05:29:34.607750   11129 addons.go:70] Setting ingress=true in profile "addons-193927"
	I1210 05:29:34.607762   11129 addons.go:239] Setting addon ingress=true in "addons-193927"
	I1210 05:29:34.607760   11129 addons.go:70] Setting storage-provisioner=true in profile "addons-193927"
	I1210 05:29:34.607773   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607778   11129 addons.go:239] Setting addon storage-provisioner=true in "addons-193927"
	I1210 05:29:34.607784   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607800   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607802   11129 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-193927"
	I1210 05:29:34.607820   11129 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-193927"
	I1210 05:29:34.607844   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607846   11129 addons.go:70] Setting default-storageclass=true in profile "addons-193927"
	I1210 05:29:34.607861   11129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-193927"
	I1210 05:29:34.607919   11129 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:34.608156   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608216   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608266   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608268   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608277   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608281   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608314   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608370   11129 addons.go:70] Setting registry=true in profile "addons-193927"
	I1210 05:29:34.608389   11129 addons.go:239] Setting addon registry=true in "addons-193927"
	I1210 05:29:34.608417   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.608836   11129 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-193927"
	I1210 05:29:34.608857   11129 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-193927"
	I1210 05:29:34.608882   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.608894   11129 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-193927"
	I1210 05:29:34.608913   11129 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-193927"
	I1210 05:29:34.609202   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.607740   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.609346   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.609561   11129 out.go:179] * Verifying Kubernetes components...
	I1210 05:29:34.609644   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.610073   11129 addons.go:70] Setting volumesnapshots=true in profile "addons-193927"
	I1210 05:29:34.610113   11129 addons.go:239] Setting addon volumesnapshots=true in "addons-193927"
	I1210 05:29:34.610138   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.610663   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.619344   11129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:34.619729   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.607690   11129 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:34.607699   11129 addons.go:239] Setting addon ingress-dns=true in "addons-193927"
	I1210 05:29:34.620021   11129 addons.go:70] Setting cloud-spanner=true in profile "addons-193927"
	I1210 05:29:34.620024   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.620040   11129 addons.go:239] Setting addon cloud-spanner=true in "addons-193927"
	I1210 05:29:34.620097   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607698   11129 addons.go:239] Setting addon inspektor-gadget=true in "addons-193927"
	I1210 05:29:34.620391   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.620417   11129 addons.go:70] Setting registry-creds=true in profile "addons-193927"
	I1210 05:29:34.620436   11129 addons.go:239] Setting addon registry-creds=true in "addons-193927"
	I1210 05:29:34.620461   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.620551   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.620858   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.620905   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608882   11129 addons.go:70] Setting volcano=true in profile "addons-193927"
	I1210 05:29:34.625203   11129 addons.go:239] Setting addon volcano=true in "addons-193927"
	I1210 05:29:34.625244   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.625491   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.625651   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.643859   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.667498   11129 addons.go:239] Setting addon default-storageclass=true in "addons-193927"
	I1210 05:29:34.667607   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.668251   11129 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:29:34.668343   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.668378   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:29:34.669381   11129 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:34.669397   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:29:34.669445   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
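Many of the cli_runner lines in this stretch use a Go template with docker container inspect to recover the host port mapped to the container's SSH port (22/tcp). A sketch of the same lookup run by hand (quoting adjusted for a shell; in this log it resolves to 32768, the port the later sshutil lines dial on 127.0.0.1):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-193927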
	I1210 05:29:34.671164   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:29:34.673586   11129 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:29:34.674484   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:29:34.674494   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:29:34.674536   11129 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:29:34.674588   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.676283   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:29:34.677691   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:29:34.680581   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:29:34.681657   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:29:34.684470   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:29:34.686101   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:29:34.686116   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:29:34.686203   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.686306   11129 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:29:34.687468   11129 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:29:34.687666   11129 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:34.687678   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:29:34.687731   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.688924   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:29:34.688940   11129 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:29:34.688991   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.695343   11129 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 05:29:34.699280   11129 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:34.699299   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:29:34.699355   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	W1210 05:29:34.706425   11129 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:29:34.711509   11129 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:29:34.713505   11129 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:29:34.714641   11129 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:29:34.714656   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:29:34.714714   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.720191   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:29:34.721442   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:34.722429   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:34.723646   11129 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:34.723781   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:29:34.723905   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.728242   11129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:34.729702   11129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:34.729827   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:29:34.730042   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.737112   11129 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:29:34.738833   11129 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:29:34.738958   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:29:34.739119   11129 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:34.739134   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:29:34.739288   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.740105   11129 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:34.740121   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:29:34.740299   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.740643   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:29:34.740658   11129 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:29:34.740705   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.740386   11129 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-193927"
	I1210 05:29:34.740864   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.741974   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.743975   11129 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:29:34.745523   11129 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:34.745575   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:29:34.745683   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.749332   11129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
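The long bash pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block mapping 192.168.49.1 to host.minikube.internal ahead of the forward directive (plus a log directive ahead of errors), then replaces the ConfigMap, which is what the later "host record injected into CoreDNS's ConfigMap" line confirms. A sketch of checking the result by hand:

	# the Corefile should now contain the injected stanza:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'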
	I1210 05:29:34.773162   11129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:34.773185   11129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:29:34.773273   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.774130   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.787858   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.797846   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.798426   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.798989   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.799408   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.799890   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.804647   11129 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:29:34.806350   11129 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:29:34.807370   11129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:34.807388   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:29:34.807447   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.808760   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.818890   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.827054   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.834299   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.835161   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.835668   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.839036   11129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:34.839247   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	W1210 05:29:34.854872   11129 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:29:34.854926   11129 retry.go:31] will retry after 368.340878ms: ssh: handshake failed: EOF
	I1210 05:29:34.855729   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	W1210 05:29:34.857363   11129 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:29:34.857528   11129 retry.go:31] will retry after 193.849913ms: ssh: handshake failed: EOF
	I1210 05:29:34.964149   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:29:34.964483   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:29:34.966351   11129 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:29:34.966450   11129 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:29:34.976133   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:34.976134   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:34.987493   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:29:34.987516   11129 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:29:34.990032   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:34.991198   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:34.995572   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:34.996004   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:29:34.996025   11129 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:29:34.999253   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:35.003076   11129 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:35.003105   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:29:35.004137   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:29:35.004155   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:29:35.014274   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:35.033366   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:29:35.033398   11129 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:29:35.038460   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:35.043396   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:35.049204   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:35.049227   11129 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:29:35.081756   11129 node_ready.go:35] waiting up to 6m0s for node "addons-193927" to be "Ready" ...
	I1210 05:29:35.082016   11129 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1210 05:29:35.083035   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:29:35.083054   11129 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:29:35.086459   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:35.102428   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:29:35.102451   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:29:35.118435   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:35.146561   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:35.146590   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:29:35.156966   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:29:35.156997   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:29:35.207664   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:35.215348   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:29:35.215376   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:29:35.283865   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:29:35.283896   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:29:35.311120   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:35.348796   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:29:35.349157   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:29:35.395641   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:29:35.395690   11129 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:29:35.435522   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:29:35.435542   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:29:35.482207   11129 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:29:35.482303   11129 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:29:35.489963   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:29:35.490027   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:29:35.526187   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:35.526212   11129 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:29:35.527052   11129 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:29:35.527073   11129 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:29:35.580257   11129 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:29:35.580297   11129 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:29:35.586760   11129 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-193927" context rescaled to 1 replicas
	I1210 05:29:35.590211   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:35.608206   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:29:35.608233   11129 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:29:35.638264   11129 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:35.638301   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:29:35.680874   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:36.220064   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.22881525s)
	I1210 05:29:36.220169   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.230098573s)
	I1210 05:29:36.220192   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.224593126s)
	I1210 05:29:36.220199   11129 addons.go:495] Verifying addon ingress=true in "addons-193927"
	I1210 05:29:36.220242   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.220969444s)
	I1210 05:29:36.220326   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206025779s)
	I1210 05:29:36.220374   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.181892806s)
	I1210 05:29:36.220499   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.177078114s)
	I1210 05:29:36.220593   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.134107135s)
	I1210 05:29:36.220622   11129 addons.go:495] Verifying addon registry=true in "addons-193927"
	I1210 05:29:36.220698   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.102228073s)
	I1210 05:29:36.220721   11129 addons.go:495] Verifying addon metrics-server=true in "addons-193927"
	I1210 05:29:36.220780   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.013083589s)
	I1210 05:29:36.223161   11129 out.go:179] * Verifying ingress addon...
	I1210 05:29:36.223199   11129 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-193927 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:29:36.223325   11129 out.go:179] * Verifying registry addon...
	I1210 05:29:36.225052   11129 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:29:36.228289   11129 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:29:36.234367   11129 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:29:36.234395   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:36.235112   11129 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1210 05:29:36.235317   11129 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
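The 'default-storageclass' warning above is an optimistic-concurrency conflict rather than a real failure: the addon tries to strip the default marking from the local-path StorageClass (just created by storage-provisioner-rancher) while that object is still being written, so its stale resourceVersion is rejected. A hedged sketch of performing the same flip manually with the standard is-default-class annotation; the class name "standard" is assumed to be minikube's bundled default class:

	kubectl get storageclass   # the current default is marked "(default)"
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'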
	I1210 05:29:36.542985   11129 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-193927"
	I1210 05:29:36.545212   11129 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:29:36.548136   11129 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:29:36.551477   11129 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:29:36.551495   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:36.728684   11129 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:29:36.728708   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:36.730669   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:36.950600   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.26967317s)
	W1210 05:29:36.950649   11129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:29:36.950672   11129 retry.go:31] will retry after 319.039602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
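The failure above is an ordering problem, not a content problem: the VolumeSnapshotClass object and the CRD that defines it are applied in a single kubectl invocation, so the class can reach the API server before the snapshot.storage.k8s.io/v1 mapping exists ("ensure CRDs are installed first"). minikube simply retries, adding --force a few lines further down; a manual workaround would be to split the apply so the CRDs land and are established first, e.g. (file names taken from this log):

	# install the snapshot CRDs first and wait for them to be served
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io

	# then apply the objects that depend on them
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	              -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	              -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml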
	I1210 05:29:37.051332   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:37.084499   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:37.228328   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:37.230238   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:37.270866   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:37.550900   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:37.727633   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:37.730228   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:38.051546   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:38.227799   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:38.230339   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:38.550326   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:38.727623   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:38.729999   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:39.051637   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:39.227479   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:39.230911   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:39.550361   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:39.584155   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:39.694664   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.423758478s)
	I1210 05:29:39.728402   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:39.730099   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:40.051157   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:40.228307   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:40.230211   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:40.550898   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:40.728701   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:40.730133   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:41.051352   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:41.227909   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:41.230390   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:41.550745   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:41.584649   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:41.727793   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:41.730348   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:42.051068   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:42.228567   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:42.229952   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:42.260794   11129 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:29:42.260858   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:42.278101   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:42.386960   11129 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:29:42.398736   11129 addons.go:239] Setting addon gcp-auth=true in "addons-193927"
	I1210 05:29:42.398792   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:42.399278   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:42.415667   11129 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:29:42.415716   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:42.431461   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:42.522647   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:42.523818   11129 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:29:42.524862   11129 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:29:42.524872   11129 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:29:42.536719   11129 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:29:42.536735   11129 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:29:42.548294   11129 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:29:42.548310   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:29:42.550969   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:42.559836   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:29:42.728918   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:42.730584   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:42.845443   11129 addons.go:495] Verifying addon gcp-auth=true in "addons-193927"
	I1210 05:29:42.847598   11129 out.go:179] * Verifying gcp-auth addon...
	I1210 05:29:42.849336   11129 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:29:42.851460   11129 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:29:42.851478   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:43.051461   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:43.227542   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:43.230141   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:43.352241   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:43.550466   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:43.727988   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:43.730756   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:43.852191   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:44.050878   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:44.083645   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:44.227859   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:44.230354   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:44.351327   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:44.550914   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:44.728118   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:44.730680   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:44.851719   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:45.051405   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:45.228124   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:45.230708   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:45.351887   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:45.551358   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:45.727800   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:45.730353   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:45.851682   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:46.051127   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:46.084384   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:46.228191   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:46.230845   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:46.352490   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:46.551300   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:46.727712   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:46.730507   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:46.851578   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:47.051072   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:47.228622   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:47.230045   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:47.352244   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:47.550943   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:47.728438   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:47.729989   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:47.852223   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:48.050750   11129 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:29:48.050769   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:48.084173   11129 node_ready.go:49] node "addons-193927" is "Ready"
	I1210 05:29:48.084196   11129 node_ready.go:38] duration metric: took 13.002414459s for node "addons-193927" to be "Ready" ...
	I1210 05:29:48.084207   11129 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:29:48.084256   11129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:29:48.097808   11129 api_server.go:72] duration metric: took 13.490322663s to wait for apiserver process to appear ...
	I1210 05:29:48.097831   11129 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:29:48.097853   11129 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 05:29:48.103652   11129 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 05:29:48.104664   11129 api_server.go:141] control plane version: v1.34.3
	I1210 05:29:48.104694   11129 api_server.go:131] duration metric: took 6.855343ms to wait for apiserver health ...
	I1210 05:29:48.104704   11129 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:29:48.152545   11129 system_pods.go:59] 20 kube-system pods found
	I1210 05:29:48.152598   11129 system_pods.go:61] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.152616   11129 system_pods.go:61] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:48.152626   11129 system_pods.go:61] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.152642   11129 system_pods.go:61] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.152653   11129 system_pods.go:61] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.152659   11129 system_pods.go:61] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.152665   11129 system_pods.go:61] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.152672   11129 system_pods.go:61] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.152678   11129 system_pods.go:61] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.152692   11129 system_pods.go:61] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.152698   11129 system_pods.go:61] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.152704   11129 system_pods.go:61] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.152711   11129 system_pods.go:61] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.152722   11129 system_pods.go:61] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.152731   11129 system_pods.go:61] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.152743   11129 system_pods.go:61] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.152761   11129 system_pods.go:61] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.152773   11129 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.152785   11129 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.152794   11129 system_pods.go:61] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:29:48.152806   11129 system_pods.go:74] duration metric: took 48.094327ms to wait for pod list to return data ...
	I1210 05:29:48.152819   11129 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:29:48.159844   11129 default_sa.go:45] found service account: "default"
	I1210 05:29:48.159872   11129 default_sa.go:55] duration metric: took 7.046334ms for default service account to be created ...
	I1210 05:29:48.159885   11129 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:29:48.252276   11129 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:29:48.252301   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:48.252718   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:48.253896   11129 system_pods.go:86] 20 kube-system pods found
	I1210 05:29:48.253929   11129 system_pods.go:89] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.253942   11129 system_pods.go:89] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:48.253951   11129 system_pods.go:89] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.253959   11129 system_pods.go:89] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.253971   11129 system_pods.go:89] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.253977   11129 system_pods.go:89] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.253988   11129 system_pods.go:89] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.253994   11129 system_pods.go:89] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.254003   11129 system_pods.go:89] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.254010   11129 system_pods.go:89] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.254018   11129 system_pods.go:89] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.254025   11129 system_pods.go:89] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.254035   11129 system_pods.go:89] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.254043   11129 system_pods.go:89] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.254051   11129 system_pods.go:89] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.254063   11129 system_pods.go:89] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.254074   11129 system_pods.go:89] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.254094   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.254107   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.254119   11129 system_pods.go:89] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:29:48.254137   11129 retry.go:31] will retry after 255.495687ms: missing components: kube-dns
	I1210 05:29:48.352752   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:48.514454   11129 system_pods.go:86] 20 kube-system pods found
	I1210 05:29:48.514498   11129 system_pods.go:89] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.514509   11129 system_pods.go:89] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:48.514519   11129 system_pods.go:89] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.514528   11129 system_pods.go:89] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.514544   11129 system_pods.go:89] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.514550   11129 system_pods.go:89] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.514556   11129 system_pods.go:89] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.514562   11129 system_pods.go:89] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.514567   11129 system_pods.go:89] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.514576   11129 system_pods.go:89] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.514579   11129 system_pods.go:89] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.514584   11129 system_pods.go:89] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.514591   11129 system_pods.go:89] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.514606   11129 system_pods.go:89] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.514617   11129 system_pods.go:89] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.514625   11129 system_pods.go:89] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.514632   11129 system_pods.go:89] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.514640   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.514649   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.514658   11129 system_pods.go:89] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:29:48.514678   11129 retry.go:31] will retry after 342.561952ms: missing components: kube-dns
	I1210 05:29:48.551503   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:48.728634   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:48.730689   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:48.852804   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:48.861917   11129 system_pods.go:86] 20 kube-system pods found
	I1210 05:29:48.861954   11129 system_pods.go:89] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.861962   11129 system_pods.go:89] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Running
	I1210 05:29:48.861974   11129 system_pods.go:89] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.861996   11129 system_pods.go:89] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.862009   11129 system_pods.go:89] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.862014   11129 system_pods.go:89] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.862022   11129 system_pods.go:89] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.862028   11129 system_pods.go:89] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.862036   11129 system_pods.go:89] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.862044   11129 system_pods.go:89] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.862053   11129 system_pods.go:89] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.862059   11129 system_pods.go:89] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.862070   11129 system_pods.go:89] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.862091   11129 system_pods.go:89] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.862106   11129 system_pods.go:89] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.862117   11129 system_pods.go:89] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.862131   11129 system_pods.go:89] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.862142   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.862154   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.862163   11129 system_pods.go:89] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Running
	I1210 05:29:48.862175   11129 system_pods.go:126] duration metric: took 702.282552ms to wait for k8s-apps to be running ...
	I1210 05:29:48.862187   11129 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:29:48.862238   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:48.879197   11129 system_svc.go:56] duration metric: took 17.001637ms WaitForService to wait for kubelet
	I1210 05:29:48.879234   11129 kubeadm.go:587] duration metric: took 14.271744774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:29:48.879256   11129 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:29:48.882406   11129 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 05:29:48.882441   11129 node_conditions.go:123] node cpu capacity is 8
	I1210 05:29:48.882459   11129 node_conditions.go:105] duration metric: took 3.198011ms to run NodePressure ...
	I1210 05:29:48.882476   11129 start.go:242] waiting for startup goroutines ...
	I1210 05:29:49.051614   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:49.228358   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:49.230562   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:49.352107   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:49.551726   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:49.728722   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:49.730323   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:49.853805   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:50.053242   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:50.229973   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:50.231896   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:50.352980   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:50.552445   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:50.728333   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:50.730610   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:50.853136   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:51.052329   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:51.228333   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:51.231010   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:51.353061   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:51.552203   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:51.727730   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:51.730451   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:51.851763   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:52.051381   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:52.229054   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:52.231391   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:52.354370   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:52.552591   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:52.731205   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:52.732897   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:52.852912   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:53.052160   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:53.228246   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:53.231418   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:53.352198   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:53.552064   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:53.747367   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:53.747659   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:53.852467   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:54.062448   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:54.228677   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:54.230309   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:54.353479   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:54.551650   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:54.728433   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:54.730573   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:54.852456   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:55.051738   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:55.228319   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:55.230284   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:55.353158   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:55.552387   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:55.728682   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:55.730581   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:55.852645   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:56.051480   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:56.228198   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:56.230821   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:56.352680   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:56.551458   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:56.728310   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:56.730318   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:56.853068   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:57.052528   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:57.228586   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:57.230495   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:57.351885   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:57.551880   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:57.728705   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:57.730235   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:57.853016   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:58.051945   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:58.228297   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:58.231058   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.352764   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:58.551728   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:58.730467   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:58.730952   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.852276   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:59.052147   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:59.229151   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:59.230925   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.352704   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:59.551833   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:59.728778   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:59.730473   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.852420   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:00.051639   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.229286   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.231126   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.353496   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:00.551635   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.728020   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.730659   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.852030   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:01.052298   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.228971   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.231032   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.352870   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:01.552056   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.729206   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.731529   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.852375   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.051628   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.228818   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.230680   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.352107   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.550851   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.728645   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.730534   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.852259   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.050921   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.228367   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.230340   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.352894   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.552929   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.729113   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.731045   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.853350   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.051685   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.228568   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.230826   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.351920   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.551450   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.727877   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.730437   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.852245   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.051846   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.228886   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.230636   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.352798   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.551689   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.728467   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.730216   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.852377   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.051174   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.227834   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.230898   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.352650   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.552354   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.729860   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.731733   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.852134   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.052242   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.228758   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.230307   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.351609   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.551794   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.728501   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.730521   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.852263   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.051103   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.228233   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.230642   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.351807   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.551647   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.728693   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.730873   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.852391   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.051353   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.227840   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.230355   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.351480   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.551197   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.728325   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.730112   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.852867   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.051912   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.229959   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.231052   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.352865   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.551713   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.728311   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.730155   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.852948   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.052165   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.228985   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.230930   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.352866   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.551661   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.728309   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.730198   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.852529   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.051439   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.228192   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.230761   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.352612   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.552430   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.728344   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.730175   11129 kapi.go:107] duration metric: took 36.501884683s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 05:30:12.853201   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.052019   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.229372   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.352526   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.551003   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.728581   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.851430   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.051198   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.228589   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.351586   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.551541   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.727721   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.851860   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.051307   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.227378   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.352624   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.551006   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.728446   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.851468   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.051305   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.228303   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.353249   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.552012   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.729070   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.852533   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.051366   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.229027   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.352763   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.552156   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.729208   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.852516   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.051635   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.228524   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.355262   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.553660   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.729992   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.853016   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.052308   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.228138   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.353265   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.551421   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.730521   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.852163   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.051149   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.228804   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.352448   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.551925   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.728254   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.852505   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.051607   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.228534   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.352218   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.552153   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.729459   11129 kapi.go:107] duration metric: took 45.50440368s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:30:21.852412   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.051396   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.352794   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.552012   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.853173   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.054678   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.353071   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.670867   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.851972   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.052435   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.351928   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.552561   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.852985   11129 kapi.go:107] duration metric: took 42.003643985s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:30:24.858215   11129 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-193927 cluster.
	I1210 05:30:24.859612   11129 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:30:24.860752   11129 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 05:30:25.052226   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.551209   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.051271   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.551508   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.051618   11129 kapi.go:107] duration metric: took 50.503482475s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:30:27.053192   11129 out.go:179] * Enabled addons: registry-creds, ingress-dns, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 05:30:27.054164   11129 addons.go:530] duration metric: took 52.446657211s for enable addons: enabled=[registry-creds ingress-dns amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 05:30:27.054197   11129 start.go:247] waiting for cluster config update ...
	I1210 05:30:27.054218   11129 start.go:256] writing updated cluster config ...
	I1210 05:30:27.054477   11129 ssh_runner.go:195] Run: rm -f paused
	I1210 05:30:27.058315   11129 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:30:27.060760   11129 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fk5gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.064125   11129 pod_ready.go:94] pod "coredns-66bc5c9577-fk5gt" is "Ready"
	I1210 05:30:27.064141   11129 pod_ready.go:86] duration metric: took 3.362812ms for pod "coredns-66bc5c9577-fk5gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.065571   11129 pod_ready.go:83] waiting for pod "etcd-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.068686   11129 pod_ready.go:94] pod "etcd-addons-193927" is "Ready"
	I1210 05:30:27.068706   11129 pod_ready.go:86] duration metric: took 3.118554ms for pod "etcd-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.070431   11129 pod_ready.go:83] waiting for pod "kube-apiserver-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.073663   11129 pod_ready.go:94] pod "kube-apiserver-addons-193927" is "Ready"
	I1210 05:30:27.073679   11129 pod_ready.go:86] duration metric: took 3.231055ms for pod "kube-apiserver-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.075223   11129 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.461287   11129 pod_ready.go:94] pod "kube-controller-manager-addons-193927" is "Ready"
	I1210 05:30:27.461313   11129 pod_ready.go:86] duration metric: took 386.072735ms for pod "kube-controller-manager-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.661665   11129 pod_ready.go:83] waiting for pod "kube-proxy-j2r54" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.061557   11129 pod_ready.go:94] pod "kube-proxy-j2r54" is "Ready"
	I1210 05:30:28.061580   11129 pod_ready.go:86] duration metric: took 399.891967ms for pod "kube-proxy-j2r54" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.262414   11129 pod_ready.go:83] waiting for pod "kube-scheduler-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.661214   11129 pod_ready.go:94] pod "kube-scheduler-addons-193927" is "Ready"
	I1210 05:30:28.661240   11129 pod_ready.go:86] duration metric: took 398.800238ms for pod "kube-scheduler-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.661251   11129 pod_ready.go:40] duration metric: took 1.602910266s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:30:28.704459   11129 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 05:30:28.705787   11129 out.go:179] * Done! kubectl is now configured to use "addons-193927" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 05:32:02 addons-193927 crio[767]: time="2025-12-10T05:32:02.487449666Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=398836d5-5fbc-4c75-8c3d-566dd3f762c4 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:32:02 addons-193927 crio[767]: time="2025-12-10T05:32:02.492151811Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.531559806Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=398836d5-5fbc-4c75-8c3d-566dd3f762c4 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.532000121Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=a377104a-3f45-492c-8690-4d205be4be16 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.564628342Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=ed5e64a5-3414-4403-8470-594810967114 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.568028105Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-ghgkh/registry-creds" id=1a9b64de-918b-4667-aa4f-142c0e9913c4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.568181183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.573315342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.573884873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.605601197Z" level=info msg="Created container 75e19d729e94f329191c7d28e09fd0b81b6416f227fbb3cdf06fb4b94ba3b0f4: kube-system/registry-creds-764b6fb674-ghgkh/registry-creds" id=1a9b64de-918b-4667-aa4f-142c0e9913c4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.606146276Z" level=info msg="Starting container: 75e19d729e94f329191c7d28e09fd0b81b6416f227fbb3cdf06fb4b94ba3b0f4" id=96b94b56-245d-4069-953b-fc69674fe0c0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 05:32:03 addons-193927 crio[767]: time="2025-12-10T05:32:03.607869291Z" level=info msg="Started container" PID=9945 containerID=75e19d729e94f329191c7d28e09fd0b81b6416f227fbb3cdf06fb4b94ba3b0f4 description=kube-system/registry-creds-764b6fb674-ghgkh/registry-creds id=96b94b56-245d-4069-953b-fc69674fe0c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6e0b35e8b8a25a3e797d4c97e04bf9a2c2f78bb9ea3e21b916520008ee5bcee
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.178034498Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-75p4n/POD" id=9dbf7f14-39cb-4bcd-baa4-5a963cfb8462 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.178136029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.184091581Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-75p4n Namespace:default ID:08d5c1edfa0edc8a17194ebf5fb46260799c4d94b91d6ee34f114d2ac48e5a12 UID:b47e6c46-0662-4887-989f-5c1fb4a6e3e4 NetNS:/var/run/netns/fa60b84c-e8dd-4540-97e3-8bd4c238f0d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000138b68}] Aliases:map[]}"
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.184125035Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-75p4n to CNI network \"kindnet\" (type=ptp)"
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.19389295Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-75p4n Namespace:default ID:08d5c1edfa0edc8a17194ebf5fb46260799c4d94b91d6ee34f114d2ac48e5a12 UID:b47e6c46-0662-4887-989f-5c1fb4a6e3e4 NetNS:/var/run/netns/fa60b84c-e8dd-4540-97e3-8bd4c238f0d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000138b68}] Aliases:map[]}"
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.19403269Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-75p4n for CNI network kindnet (type=ptp)"
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.194809795Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.19560978Z" level=info msg="Ran pod sandbox 08d5c1edfa0edc8a17194ebf5fb46260799c4d94b91d6ee34f114d2ac48e5a12 with infra container: default/hello-world-app-5d498dc89-75p4n/POD" id=9dbf7f14-39cb-4bcd-baa4-5a963cfb8462 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.196825988Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8afe59b0-c34f-4be5-bb23-f031d4f61266 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.196943978Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=8afe59b0-c34f-4be5-bb23-f031d4f61266 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.196991661Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=8afe59b0-c34f-4be5-bb23-f031d4f61266 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.197693599Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0853750b-add3-430e-8963-5ca0c7015c1f name=/runtime.v1.ImageService/PullImage
	Dec 10 05:33:12 addons-193927 crio[767]: time="2025-12-10T05:33:12.201695936Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	75e19d729e94f       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   a6e0b35e8b8a2       registry-creds-764b6fb674-ghgkh             kube-system
	c5134dc59b615       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                                           2 minutes ago        Running             nginx                                    0                   9e07a5b9d009b       nginx                                       default
	e4a1f95c78379       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   507e1b547481c       busybox                                     default
	5d4a1d5da42cd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	cd1f99729cdad       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	95ca228da5e9f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	252c06e303732       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   c914ddd9b1e0e       gcp-auth-78565c9fb4-4wmrx                   gcp-auth
	5b9cf05c0ab5e       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	4717fea92c7df       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago        Running             controller                               0                   f66ee1021a956       ingress-nginx-controller-85d4c799dd-h6x6q   ingress-nginx
	6c8ffe9e271a1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	9484d473adb72       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago        Running             gadget                                   0                   d8e061ef58dcd       gadget-b2r94                                gadget
	42c477d9d74b6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   0c65a9b9191ac       registry-proxy-jr8xs                        kube-system
	e4477630dcb98       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   758ac4c0239d8       local-path-provisioner-648f6765c9-nkqx4     local-path-storage
	dc84fc8b0d7ae       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   7d92563ce8e11       csi-hostpath-resizer-0                      kube-system
	217c6052689f8       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   d7c3f856bdec9       snapshot-controller-7d9fbc56b8-4ckl2        kube-system
	f02ac84563fd8       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   e6e6f4fe14aac       nvidia-device-plugin-daemonset-zdg7v        kube-system
	c2dd148c15de2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   2187054c978b0       amd-gpu-device-plugin-742mx                 kube-system
	bd251ebea34ff       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   66cbce320925a       snapshot-controller-7d9fbc56b8-v87tq        kube-system
	976a8b19e2a98       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	34f8957b9aa57       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago        Exited              patch                                    0                   f41b818afd9e4       ingress-nginx-admission-patch-tc5th         ingress-nginx
	7a65eea81e573       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   556c98b0a1c3b       csi-hostpath-attacher-0                     kube-system
	852b508862e49       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago        Exited              create                                   0                   a5445ea44de2a       ingress-nginx-admission-create-zw7mz        ingress-nginx
	d2d42e5524b3c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   cfc23ee8bbafe       yakd-dashboard-5ff678cb9-7nd7x              yakd-dashboard
	05be8bf506f18       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   27434b6a6ee66       registry-6b586f9694-h4d7x                   kube-system
	c2e8fc6eb52c0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   da4e0a6e7190a       kube-ingress-dns-minikube                   kube-system
	395c860d8ecd8       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   62ad030fae20a       cloud-spanner-emulator-5bdddb765-2jkx9      default
	cf1e8860d68b3       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   c6d2f74cdc828       metrics-server-85b7d694d7-xswrz             kube-system
	3db45466cabf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   7558a11f8badc       storage-provisioner                         kube-system
	a56c2752b1ef9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   fa7ccbb7d155c       coredns-66bc5c9577-fk5gt                    kube-system
	367aea18176f0       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11                                           3 minutes ago        Running             kindnet-cni                              0                   cf2391a3d70e2       kindnet-bbr2p                               kube-system
	206f9657e0226       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             3 minutes ago        Running             kube-proxy                               0                   20c7edea3ad19       kube-proxy-j2r54                            kube-system
	6501c9a3d5552       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             3 minutes ago        Running             kube-scheduler                           0                   8073fc2e65bff       kube-scheduler-addons-193927                kube-system
	0a2be4003b1b3       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             3 minutes ago        Running             kube-apiserver                           0                   e7539e94bcf35       kube-apiserver-addons-193927                kube-system
	3b5e4f42b79e9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago        Running             etcd                                     0                   48483d0025ed7       etcd-addons-193927                          kube-system
	b2eb3db5b9910       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             3 minutes ago        Running             kube-controller-manager                  0                   f728cb3f57927       kube-controller-manager-addons-193927       kube-system
	
	
	==> coredns [a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b] <==
	[INFO] 10.244.0.22:51781 - 12457 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129822s
	[INFO] 10.244.0.22:41225 - 55343 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005268847s
	[INFO] 10.244.0.22:40262 - 1629 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00631931s
	[INFO] 10.244.0.22:39588 - 29678 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005758401s
	[INFO] 10.244.0.22:39115 - 48800 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.013321955s
	[INFO] 10.244.0.22:52988 - 1128 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006107415s
	[INFO] 10.244.0.22:56820 - 3069 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006264135s
	[INFO] 10.244.0.22:57717 - 53764 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001839231s
	[INFO] 10.244.0.22:46593 - 8361 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002307643s
	[INFO] 10.244.0.25:36555 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000263351s
	[INFO] 10.244.0.25:51165 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146635s
	[INFO] 10.244.0.31:37729 - 36730 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000265286s
	[INFO] 10.244.0.31:37549 - 31060 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000305411s
	[INFO] 10.244.0.31:35901 - 58334 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000136853s
	[INFO] 10.244.0.31:37414 - 9944 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000188422s
	[INFO] 10.244.0.31:57734 - 63342 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000130175s
	[INFO] 10.244.0.31:38702 - 53565 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000195569s
	[INFO] 10.244.0.31:37502 - 37580 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004494959s
	[INFO] 10.244.0.31:32994 - 21962 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.007035879s
	[INFO] 10.244.0.31:47871 - 9228 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003949173s
	[INFO] 10.244.0.31:35652 - 39309 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005152436s
	[INFO] 10.244.0.31:33306 - 23402 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00354815s
	[INFO] 10.244.0.31:44106 - 782 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005981225s
	[INFO] 10.244.0.31:56504 - 44585 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001440047s
	[INFO] 10.244.0.31:39264 - 36121 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001506612s
	
	
	==> describe nodes <==
	Name:               addons-193927
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-193927
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=addons-193927
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_29_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-193927
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-193927"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:29:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-193927
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:33:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:32:33 +0000   Wed, 10 Dec 2025 05:29:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:32:33 +0000   Wed, 10 Dec 2025 05:29:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:32:33 +0000   Wed, 10 Dec 2025 05:29:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:32:33 +0000   Wed, 10 Dec 2025 05:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-193927
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                62b96902-3e68-44a0-bbf4-5e77aa3a7b36
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     cloud-spanner-emulator-5bdddb765-2jkx9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  default                     hello-world-app-5d498dc89-75p4n              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-b2r94                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  gcp-auth                    gcp-auth-78565c9fb4-4wmrx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-h6x6q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m37s
	  kube-system                 amd-gpu-device-plugin-742mx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 coredns-66bc5c9577-fk5gt                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m38s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 csi-hostpathplugin-2wcqc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-addons-193927                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m44s
	  kube-system                 kindnet-bbr2p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m39s
	  kube-system                 kube-apiserver-addons-193927                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-addons-193927        200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-proxy-j2r54                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-scheduler-addons-193927                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 metrics-server-85b7d694d7-xswrz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m38s
	  kube-system                 nvidia-device-plugin-daemonset-zdg7v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 registry-6b586f9694-h4d7x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 registry-creds-764b6fb674-ghgkh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 registry-proxy-jr8xs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-4ckl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-v87tq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  local-path-storage          local-path-provisioner-648f6765c9-nkqx4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7nd7x               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m37s  kube-proxy       
	  Normal  Starting                 3m44s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s  kubelet          Node addons-193927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s  kubelet          Node addons-193927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s  kubelet          Node addons-193927 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m39s  node-controller  Node addons-193927 event: Registered Node addons-193927 in Controller
	  Normal  NodeReady                3m26s  kubelet          Node addons-193927 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.085783] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023769] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.147072] kauditd_printk_skb: 47 callbacks suppressed
	[Dec10 05:30] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.051409] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[Dec10 05:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +2.047781] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +4.031549] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +8.447180] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[ +16.382295] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[Dec10 05:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	
	
	==> etcd [3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a] <==
	{"level":"warn","ts":"2025-12-10T05:29:26.543509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.550592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.557149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.563181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.569207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.590241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.593366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.605748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.646540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:34.153840Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.057184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/addons-193927\" limit:1 ","response":"range_response_count:1 size:709"}
	{"level":"info","ts":"2025-12-10T05:29:34.153847Z","caller":"traceutil/trace.go:172","msg":"trace[2028098459] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"108.766038ms","start":"2025-12-10T05:29:34.045062Z","end":"2025-12-10T05:29:34.153828Z","steps":["trace[2028098459] 'process raft request'  (duration: 45.321423ms)","trace[2028098459] 'compare'  (duration: 63.363517ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T05:29:34.153929Z","caller":"traceutil/trace.go:172","msg":"trace[1316088184] range","detail":"{range_begin:/registry/csinodes/addons-193927; range_end:; response_count:1; response_revision:294; }","duration":"108.157458ms","start":"2025-12-10T05:29:34.045750Z","end":"2025-12-10T05:29:34.153907Z","steps":["trace[1316088184] 'agreement among raft nodes before linearized reading'  (duration: 44.603618ms)","trace[1316088184] 'range keys from in-memory index tree'  (duration: 63.387972ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:29:34.284789Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.198951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-12-10T05:29:34.284808Z","caller":"traceutil/trace.go:172","msg":"trace[714758576] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"128.633533ms","start":"2025-12-10T05:29:34.156163Z","end":"2025-12-10T05:29:34.284797Z","steps":["trace[714758576] 'process raft request'  (duration: 124.342694ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:29:34.284862Z","caller":"traceutil/trace.go:172","msg":"trace[1994471047] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:297; }","duration":"103.275718ms","start":"2025-12-10T05:29:34.181568Z","end":"2025-12-10T05:29:34.284844Z","steps":["trace[1994471047] 'agreement among raft nodes before linearized reading'  (duration: 98.944888ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:29:37.444573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:37.451244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.144234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.152641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.165585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.174024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37820","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T05:30:23.668673Z","caller":"traceutil/trace.go:172","msg":"trace[1363396482] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1172; }","duration":"118.354145ms","start":"2025-12-10T05:30:23.550301Z","end":"2025-12-10T05:30:23.668655Z","steps":["trace[1363396482] 'read index received'  (duration: 118.349693ms)","trace[1363396482] 'applied index is now lower than readState.Index'  (duration: 3.851µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:30:23.668788Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.472916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:30:23.668816Z","caller":"traceutil/trace.go:172","msg":"trace[1017580327] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1148; }","duration":"118.515715ms","start":"2025-12-10T05:30:23.550290Z","end":"2025-12-10T05:30:23.668806Z","steps":["trace[1017580327] 'agreement among raft nodes before linearized reading'  (duration: 118.44338ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:30:23.668890Z","caller":"traceutil/trace.go:172","msg":"trace[478720473] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"139.362327ms","start":"2025-12-10T05:30:23.529514Z","end":"2025-12-10T05:30:23.668876Z","steps":["trace[478720473] 'process raft request'  (duration: 139.246147ms)"],"step_count":1}
	
	
	==> gcp-auth [252c06e30373275a7c74a3d73ac3e987b7218f6a70e8caf1bdb5e4bebdcd5a85] <==
	2025/12/10 05:30:23 GCP Auth Webhook started!
	2025/12/10 05:30:29 Ready to marshal response ...
	2025/12/10 05:30:29 Ready to write response ...
	2025/12/10 05:30:29 Ready to marshal response ...
	2025/12/10 05:30:29 Ready to write response ...
	2025/12/10 05:30:29 Ready to marshal response ...
	2025/12/10 05:30:29 Ready to write response ...
	2025/12/10 05:30:45 Ready to marshal response ...
	2025/12/10 05:30:45 Ready to write response ...
	2025/12/10 05:30:45 Ready to marshal response ...
	2025/12/10 05:30:45 Ready to write response ...
	2025/12/10 05:30:47 Ready to marshal response ...
	2025/12/10 05:30:47 Ready to write response ...
	2025/12/10 05:30:50 Ready to marshal response ...
	2025/12/10 05:30:50 Ready to write response ...
	2025/12/10 05:30:56 Ready to marshal response ...
	2025/12/10 05:30:56 Ready to write response ...
	2025/12/10 05:30:56 Ready to marshal response ...
	2025/12/10 05:30:56 Ready to write response ...
	2025/12/10 05:31:13 Ready to marshal response ...
	2025/12/10 05:31:13 Ready to write response ...
	2025/12/10 05:33:11 Ready to marshal response ...
	2025/12/10 05:33:11 Ready to write response ...
	
	
	==> kernel <==
	 05:33:13 up 15 min,  0 user,  load average: 0.82, 0.74, 0.37
	Linux addons-193927 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525] <==
	I1210 05:31:07.547704       1 main.go:301] handling current node
	I1210 05:31:17.547831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:31:17.547859       1 main.go:301] handling current node
	I1210 05:31:27.545581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:31:27.545703       1 main.go:301] handling current node
	I1210 05:31:37.545240       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:31:37.545270       1 main.go:301] handling current node
	I1210 05:31:47.546134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:31:47.547205       1 main.go:301] handling current node
	I1210 05:31:57.554184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:31:57.554211       1 main.go:301] handling current node
	I1210 05:32:07.547516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:32:07.547563       1 main.go:301] handling current node
	I1210 05:32:17.546713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:32:17.546746       1 main.go:301] handling current node
	I1210 05:32:27.545279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:32:27.545311       1 main.go:301] handling current node
	I1210 05:32:37.547676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:32:37.547725       1 main.go:301] handling current node
	I1210 05:32:47.552170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:32:47.552198       1 main.go:301] handling current node
	I1210 05:32:57.549346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:32:57.549393       1 main.go:301] handling current node
	I1210 05:33:07.547385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:33:07.547418       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3] <==
	W1210 05:30:02.572575       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:30:02.572616       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 05:30:02.572632       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1210 05:30:02.572661       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 05:30:02.573770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 05:30:04.144155       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 05:30:04.152567       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 05:30:04.165567       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 05:30:04.174003       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1210 05:30:06.588229       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W1210 05:30:06.588859       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:30:06.590687       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 05:30:06.608260       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:30:37.343753       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50878: use of closed network connection
	E1210 05:30:37.484150       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50908: use of closed network connection
	I1210 05:30:50.474442       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 05:30:50.653572       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.255.119"}
	I1210 05:31:02.345127       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1210 05:33:11.932601       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.34.68"}
	
	
	==> kube-controller-manager [b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26] <==
	I1210 05:29:34.133035       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 05:29:34.133068       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-193927"
	I1210 05:29:34.133122       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 05:29:34.133161       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 05:29:34.133552       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 05:29:34.134048       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 05:29:34.134097       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 05:29:34.134181       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 05:29:34.134199       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 05:29:34.136854       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 05:29:34.136867       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 05:29:34.136925       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 05:29:34.136985       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 05:29:34.136994       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 05:29:34.136999       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 05:29:34.138061       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:29:34.218457       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-193927" podCIDRs=["10.244.0.0/24"]
	I1210 05:29:49.135029       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1210 05:30:04.135216       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 05:30:04.135997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 05:30:04.136091       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 05:30:04.146454       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 05:30:04.150824       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 05:30:04.237425       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:30:04.251603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c] <==
	I1210 05:29:35.464303       1 server_linux.go:53] "Using iptables proxy"
	I1210 05:29:35.667727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:29:35.768882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:29:35.768936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 05:29:35.769058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:29:35.955483       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 05:29:35.955542       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:29:35.998075       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:29:36.009184       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:29:36.012244       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:36.035954       1 config.go:200] "Starting service config controller"
	I1210 05:29:36.057859       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:29:36.041489       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:29:36.057931       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:29:36.041498       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:29:36.057944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:29:36.042015       1 config.go:309] "Starting node config controller"
	I1210 05:29:36.057968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:29:36.057974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:29:36.158635       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:29:36.158664       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:29:36.158678       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6] <==
	E1210 05:29:27.035860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:29:27.035882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:29:27.035918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:29:27.035932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:29:27.035977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:29:27.035984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:29:27.036102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:29:27.036109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:29:27.036132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:29:27.867863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:29:27.977152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:29:27.981940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:29:27.991879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:29:28.050328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:29:28.144770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:29:28.168665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:29:28.171481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:29:28.196414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:29:28.209276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:29:28.215030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:29:28.225985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:29:28.359777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:29:28.363663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:29:28.383623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1210 05:29:30.433634       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.605535    2362 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^70e09e76-d589-11f0-85e8-6ed01d509c2d" (OuterVolumeSpecName: "task-pv-storage") pod "01d2fe10-7a45-48f5-9121-c9ac98f9339c" (UID: "01d2fe10-7a45-48f5-9121-c9ac98f9339c"). InnerVolumeSpecName "pvc-ab3554ac-de3c-4fe2-b2fc-4ed9cdb7ba7e". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.704168    2362 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-ab3554ac-de3c-4fe2-b2fc-4ed9cdb7ba7e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^70e09e76-d589-11f0-85e8-6ed01d509c2d\") on node \"addons-193927\" "
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.704192    2362 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7czvj\" (UniqueName: \"kubernetes.io/projected/01d2fe10-7a45-48f5-9121-c9ac98f9339c-kube-api-access-7czvj\") on node \"addons-193927\" DevicePath \"\""
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.711296    2362 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-ab3554ac-de3c-4fe2-b2fc-4ed9cdb7ba7e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^70e09e76-d589-11f0-85e8-6ed01d509c2d") on node "addons-193927"
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.804635    2362 reconciler_common.go:299] "Volume detached for volume \"pvc-ab3554ac-de3c-4fe2-b2fc-4ed9cdb7ba7e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^70e09e76-d589-11f0-85e8-6ed01d509c2d\") on node \"addons-193927\" DevicePath \"\""
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.948283    2362 scope.go:117] "RemoveContainer" containerID="108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a"
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.958110    2362 scope.go:117] "RemoveContainer" containerID="108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a"
	Dec 10 05:31:19 addons-193927 kubelet[2362]: E1210 05:31:19.958501    2362 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a\": container with ID starting with 108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a not found: ID does not exist" containerID="108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a"
	Dec 10 05:31:19 addons-193927 kubelet[2362]: I1210 05:31:19.958540    2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a"} err="failed to get container status \"108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a\": rpc error: code = NotFound desc = could not find container \"108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a\": container with ID starting with 108e3a08e77ed599cce35c29b5afbe410d6f02939f0d92f1d65fac75d9a6551a not found: ID does not exist"
	Dec 10 05:31:21 addons-193927 kubelet[2362]: I1210 05:31:21.468209    2362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d2fe10-7a45-48f5-9121-c9ac98f9339c" path="/var/lib/kubelet/pods/01d2fe10-7a45-48f5-9121-c9ac98f9339c/volumes"
	Dec 10 05:31:25 addons-193927 kubelet[2362]: I1210 05:31:25.466458    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-zdg7v" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:31:29 addons-193927 kubelet[2362]: I1210 05:31:29.469428    2362 scope.go:117] "RemoveContainer" containerID="657c00e6611d74e35175931d253739adec2216d74be2960d5b6ddd58e0573350"
	Dec 10 05:31:29 addons-193927 kubelet[2362]: I1210 05:31:29.479377    2362 scope.go:117] "RemoveContainer" containerID="e900d011f6fb9cec06364c0ca73f28d9506d595ea216b324eb8ed9f5f809298c"
	Dec 10 05:31:29 addons-193927 kubelet[2362]: I1210 05:31:29.489033    2362 scope.go:117] "RemoveContainer" containerID="4139b671f6c84d94ff0f1c5b44bc5aee917d412a4cd93aa13b8860d989744e22"
	Dec 10 05:31:29 addons-193927 kubelet[2362]: I1210 05:31:29.499849    2362 scope.go:117] "RemoveContainer" containerID="884c3e97f23741cddf6f12382e03b8a68a6f8fc1e3af10036129e00852035bd7"
	Dec 10 05:31:29 addons-193927 kubelet[2362]: I1210 05:31:29.510070    2362 scope.go:117] "RemoveContainer" containerID="db2f386b561a45d95298db0d89dd1028ff8132beafd8d28f1faffdd8e5dd1c87"
	Dec 10 05:31:29 addons-193927 kubelet[2362]: I1210 05:31:29.519095    2362 scope.go:117] "RemoveContainer" containerID="5a286ec27f8856fb63213063f5190fbcb09593bfa822d8f11a87949ce9c38637"
	Dec 10 05:31:37 addons-193927 kubelet[2362]: I1210 05:31:37.465799    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jr8xs" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:31:50 addons-193927 kubelet[2362]: E1210 05:31:50.797756    2362 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-ghgkh" podUID="9369e244-75eb-4b63-883e-0cb1e1d332eb"
	Dec 10 05:32:04 addons-193927 kubelet[2362]: I1210 05:32:04.119360    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-ghgkh" podStartSLOduration=148.073665249 podStartE2EDuration="2m29.11933904s" podCreationTimestamp="2025-12-10 05:29:35 +0000 UTC" firstStartedPulling="2025-12-10 05:32:02.48715815 +0000 UTC m=+153.096875396" lastFinishedPulling="2025-12-10 05:32:03.532831925 +0000 UTC m=+154.142549187" observedRunningTime="2025-12-10 05:32:04.117977812 +0000 UTC m=+154.727695099" watchObservedRunningTime="2025-12-10 05:32:04.11933904 +0000 UTC m=+154.729056307"
	Dec 10 05:32:13 addons-193927 kubelet[2362]: I1210 05:32:13.466025    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-742mx" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:32:50 addons-193927 kubelet[2362]: I1210 05:32:50.465604    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jr8xs" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:32:53 addons-193927 kubelet[2362]: I1210 05:32:53.466052    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-zdg7v" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:33:11 addons-193927 kubelet[2362]: I1210 05:33:11.939470    2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b47e6c46-0662-4887-989f-5c1fb4a6e3e4-gcp-creds\") pod \"hello-world-app-5d498dc89-75p4n\" (UID: \"b47e6c46-0662-4887-989f-5c1fb4a6e3e4\") " pod="default/hello-world-app-5d498dc89-75p4n"
	Dec 10 05:33:11 addons-193927 kubelet[2362]: I1210 05:33:11.939554    2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smpd\" (UniqueName: \"kubernetes.io/projected/b47e6c46-0662-4887-989f-5c1fb4a6e3e4-kube-api-access-6smpd\") pod \"hello-world-app-5d498dc89-75p4n\" (UID: \"b47e6c46-0662-4887-989f-5c1fb4a6e3e4\") " pod="default/hello-world-app-5d498dc89-75p4n"
	
	
	==> storage-provisioner [3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163] <==
	W1210 05:32:48.874041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:50.877210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:50.880962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:52.883527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:52.887902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:54.890604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:54.893983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:56.896390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:56.900152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:58.902735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:32:58.905945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:00.908631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:00.913094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:02.915708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:02.919252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:04.922145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:04.925676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:06.928274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:06.931964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:08.935021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:08.938369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:10.940892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:10.945634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:12.948705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:33:12.952543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-193927 -n addons-193927
helpers_test.go:270: (dbg) Run:  kubectl --context addons-193927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-75p4n ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-193927 describe pod hello-world-app-5d498dc89-75p4n ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-193927 describe pod hello-world-app-5d498dc89-75p4n ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th: exit status 1 (61.570832ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-75p4n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-193927/192.168.49.2
	Start Time:       Wed, 10 Dec 2025 05:33:11 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6smpd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6smpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-75p4n to addons-193927
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.263s (1.263s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zw7mz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tc5th" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-193927 describe pod hello-world-app-5d498dc89-75p4n ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (234.510826ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:33:14.288555   26512 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:33:14.288825   26512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:33:14.288835   26512 out.go:374] Setting ErrFile to fd 2...
	I1210 05:33:14.288840   26512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:33:14.289005   26512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:33:14.289253   26512 mustload.go:66] Loading cluster: addons-193927
	I1210 05:33:14.289536   26512 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:33:14.289553   26512 addons.go:622] checking whether the cluster is paused
	I1210 05:33:14.289628   26512 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:33:14.289639   26512 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:33:14.289974   26512 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:33:14.307212   26512 ssh_runner.go:195] Run: systemctl --version
	I1210 05:33:14.307265   26512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:33:14.324657   26512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:33:14.420068   26512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:33:14.420179   26512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:33:14.447138   26512 cri.go:89] found id: "75e19d729e94f329191c7d28e09fd0b81b6416f227fbb3cdf06fb4b94ba3b0f4"
	I1210 05:33:14.447156   26512 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:33:14.447162   26512 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:33:14.447167   26512 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:33:14.447172   26512 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:33:14.447177   26512 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:33:14.447186   26512 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:33:14.447198   26512 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:33:14.447203   26512 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:33:14.447212   26512 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:33:14.447219   26512 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:33:14.447225   26512 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:33:14.447233   26512 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:33:14.447238   26512 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:33:14.447243   26512 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:33:14.447266   26512 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:33:14.447277   26512 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:33:14.447282   26512 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:33:14.447286   26512 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:33:14.447302   26512 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:33:14.447306   26512 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:33:14.447310   26512 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:33:14.447317   26512 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:33:14.447320   26512 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:33:14.447324   26512 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:33:14.447329   26512 cri.go:89] found id: ""
	I1210 05:33:14.447372   26512 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:33:14.460194   26512 out.go:203] 
	W1210 05:33:14.461238   26512 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:33:14.461254   26512 out.go:285] * 
	* 
	W1210 05:33:14.464185   26512 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:33:14.465291   26512 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable ingress --alsologtostderr -v=1: exit status 11 (235.972539ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:33:14.525935   26575 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:33:14.526109   26575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:33:14.526126   26575 out.go:374] Setting ErrFile to fd 2...
	I1210 05:33:14.526133   26575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:33:14.526408   26575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:33:14.526779   26575 mustload.go:66] Loading cluster: addons-193927
	I1210 05:33:14.527246   26575 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:33:14.527275   26575 addons.go:622] checking whether the cluster is paused
	I1210 05:33:14.527422   26575 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:33:14.527442   26575 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:33:14.527984   26575 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:33:14.545657   26575 ssh_runner.go:195] Run: systemctl --version
	I1210 05:33:14.545706   26575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:33:14.561358   26575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:33:14.655382   26575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:33:14.655454   26575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:33:14.682709   26575 cri.go:89] found id: "75e19d729e94f329191c7d28e09fd0b81b6416f227fbb3cdf06fb4b94ba3b0f4"
	I1210 05:33:14.682725   26575 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:33:14.682729   26575 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:33:14.682732   26575 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:33:14.682735   26575 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:33:14.682739   26575 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:33:14.682741   26575 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:33:14.682744   26575 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:33:14.682746   26575 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:33:14.682757   26575 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:33:14.682761   26575 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:33:14.682764   26575 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:33:14.682767   26575 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:33:14.682770   26575 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:33:14.682774   26575 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:33:14.682783   26575 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:33:14.682791   26575 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:33:14.682797   26575 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:33:14.682802   26575 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:33:14.682806   26575 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:33:14.682813   26575 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:33:14.682824   26575 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:33:14.682831   26575 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:33:14.682838   26575 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:33:14.682843   26575 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:33:14.682846   26575 cri.go:89] found id: ""
	I1210 05:33:14.682886   26575 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:33:14.695975   26575 out.go:203] 
	W1210 05:33:14.697205   26575 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:33:14.697225   26575 out.go:285] * 
	* 
	W1210 05:33:14.700104   26575 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:33:14.701182   26575 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.47s)
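Note: the `addons disable` calls above exit 11 with MK_ADDON_DISABLE_PAUSED because, per the stderr traces, minikube first checks whether the cluster is paused (addons.go:622), lists kube-system containers via crictl, and then runs `sudo runc list -f json`, which fails on this crio node with `open /run/runc: no such file or directory`. A minimal sketch of reproducing that check by hand against the same profile, using only the commands already shown in the logs (the `minikube ssh` invocation wrapping them is illustrative):

  # the exact command whose failure triggers MK_ADDON_DISABLE_PAUSED (expected to fail here)
  out/minikube-linux-amd64 -p addons-193927 ssh -- sudo runc list -f json
  # the crictl listing that the paused check performs just before it (succeeds on this node)
  out/minikube-linux-amd64 -p addons-193927 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system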

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-b2r94" [c4e63ab9-f75b-4804-aaf6-f4f56c791115] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003581399s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (233.464578ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:30:59.539137   23631 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:59.539499   23631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:59.539510   23631 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:59.539514   23631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:59.539703   23631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:59.539981   23631 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:59.540348   23631 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:59.540370   23631 addons.go:622] checking whether the cluster is paused
	I1210 05:30:59.540459   23631 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:59.540475   23631 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:59.540861   23631 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:59.557608   23631 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:59.557656   23631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:59.573350   23631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:59.666048   23631 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:59.666130   23631 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:59.693617   23631 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:59.693633   23631 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:59.693637   23631 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:59.693640   23631 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:59.693643   23631 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:59.693646   23631 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:59.693648   23631 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:59.693651   23631 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:59.693653   23631 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:59.693659   23631 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:59.693662   23631 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:59.693664   23631 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:59.693667   23631 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:59.693669   23631 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:59.693672   23631 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:59.693679   23631 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:59.693686   23631 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:59.693690   23631 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:59.693693   23631 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:59.693696   23631 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:59.693698   23631 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:59.693701   23631 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:59.693703   23631 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:59.693706   23631 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:59.693709   23631 cri.go:89] found id: ""
	I1210 05:30:59.693740   23631 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:59.706808   23631 out.go:203] 
	W1210 05:30:59.707943   23631 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:59.707956   23631 out.go:285] * 
	* 
	W1210 05:30:59.710844   23631 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:59.711977   23631 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.24s)
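Every addon enable/disable failure in this run shares the signature above: before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then listing container states with "sudo runc list -f json"; on this CRI-O node runc's default state root /run/runc does not exist, so the check exits with status 1 and the addon command aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED) and exit status 11. A minimal sketch for reproducing the check by hand on this profile follows; which state directory the configured OCI runtime actually uses is an assumption to verify on the node.

# the two commands minikube's paused-check runs inside the node, taken from the log above
minikube -p addons-193927 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
minikube -p addons-193927 ssh -- sudo runc list -f json    # fails: open /run/runc: no such file or directory
# list the runtime state directories that do exist; /run/runc being absent suggests the configured
# OCI runtime keeps its state elsewhere (for example crun), which is an assumption, not verified here
minikube -p addons-193927 ssh -- ls /run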

                                                
                                    
TestAddons/parallel/MetricsServer (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.951183ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003124509s
addons_test.go:465: (dbg) Run:  kubectl --context addons-193927 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (233.665882ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:30:45.268666   21478 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:45.268843   21478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:45.268853   21478 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:45.268858   21478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:45.269068   21478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:45.269331   21478 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:45.269623   21478 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:45.269641   21478 addons.go:622] checking whether the cluster is paused
	I1210 05:30:45.269720   21478 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:45.269732   21478 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:45.270212   21478 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:45.288194   21478 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:45.288241   21478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:45.304265   21478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:45.397121   21478 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:45.397177   21478 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:45.426311   21478 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:45.426335   21478 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:45.426342   21478 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:45.426347   21478 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:45.426352   21478 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:45.426367   21478 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:45.426372   21478 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:45.426376   21478 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:45.426381   21478 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:45.426389   21478 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:45.426394   21478 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:45.426399   21478 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:45.426403   21478 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:45.426414   21478 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:45.426418   21478 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:45.426428   21478 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:45.426440   21478 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:45.426446   21478 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:45.426451   21478 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:45.426455   21478 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:45.426463   21478 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:45.426467   21478 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:45.426471   21478 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:45.426476   21478 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:45.426486   21478 cri.go:89] found id: ""
	I1210 05:30:45.426526   21478 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:45.441227   21478 out.go:203] 
	W1210 05:30:45.442367   21478 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:45.442381   21478 out.go:285] * 
	* 
	W1210 05:30:45.445183   21478 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:45.446200   21478 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)

                                                
                                    
TestAddons/parallel/CSI (37.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1210 05:30:42.976416    9253 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 05:30:42.981923    9253 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 05:30:42.981951    9253 kapi.go:107] duration metric: took 5.553679ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 5.565508ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-193927 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-193927 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [e0b39630-6483-4276-a0fb-d12fc3bd604f] Pending
helpers_test.go:353: "task-pv-pod" [e0b39630-6483-4276-a0fb-d12fc3bd604f] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003622127s
addons_test.go:574: (dbg) Run:  kubectl --context addons-193927 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-193927 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-193927 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-193927 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-193927 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-193927 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-193927 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [01d2fe10-7a45-48f5-9121-c9ac98f9339c] Pending
helpers_test.go:353: "task-pv-pod-restore" [01d2fe10-7a45-48f5-9121-c9ac98f9339c] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003736012s
addons_test.go:616: (dbg) Run:  kubectl --context addons-193927 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-193927 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-193927 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (235.100182ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:31:20.335749   24190 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:31:20.336031   24190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:31:20.336041   24190 out.go:374] Setting ErrFile to fd 2...
	I1210 05:31:20.336045   24190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:31:20.336237   24190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:31:20.336457   24190 mustload.go:66] Loading cluster: addons-193927
	I1210 05:31:20.336745   24190 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:31:20.336764   24190 addons.go:622] checking whether the cluster is paused
	I1210 05:31:20.336843   24190 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:31:20.336854   24190 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:31:20.337219   24190 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:31:20.354444   24190 ssh_runner.go:195] Run: systemctl --version
	I1210 05:31:20.354494   24190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:31:20.372378   24190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:31:20.466009   24190 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:31:20.466136   24190 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:31:20.493624   24190 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:31:20.493639   24190 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:31:20.493643   24190 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:31:20.493646   24190 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:31:20.493649   24190 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:31:20.493652   24190 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:31:20.493654   24190 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:31:20.493657   24190 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:31:20.493659   24190 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:31:20.493665   24190 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:31:20.493668   24190 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:31:20.493671   24190 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:31:20.493674   24190 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:31:20.493676   24190 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:31:20.493679   24190 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:31:20.493691   24190 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:31:20.493698   24190 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:31:20.493703   24190 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:31:20.493706   24190 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:31:20.493709   24190 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:31:20.493713   24190 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:31:20.493718   24190 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:31:20.493721   24190 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:31:20.493724   24190 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:31:20.493729   24190 cri.go:89] found id: ""
	I1210 05:31:20.493767   24190 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:31:20.508049   24190 out.go:203] 
	W1210 05:31:20.509286   24190 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:31:20.509304   24190 out.go:285] * 
	* 
	W1210 05:31:20.512199   24190 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:31:20.513491   24190 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (231.91187ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:31:20.571048   24253 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:31:20.571319   24253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:31:20.571328   24253 out.go:374] Setting ErrFile to fd 2...
	I1210 05:31:20.571332   24253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:31:20.571509   24253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:31:20.571748   24253 mustload.go:66] Loading cluster: addons-193927
	I1210 05:31:20.572056   24253 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:31:20.572073   24253 addons.go:622] checking whether the cluster is paused
	I1210 05:31:20.572203   24253 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:31:20.572217   24253 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:31:20.572570   24253 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:31:20.591510   24253 ssh_runner.go:195] Run: systemctl --version
	I1210 05:31:20.591564   24253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:31:20.607701   24253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:31:20.700016   24253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:31:20.700116   24253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:31:20.727374   24253 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:31:20.727391   24253 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:31:20.727405   24253 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:31:20.727410   24253 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:31:20.727414   24253 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:31:20.727418   24253 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:31:20.727423   24253 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:31:20.727427   24253 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:31:20.727431   24253 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:31:20.727439   24253 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:31:20.727447   24253 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:31:20.727453   24253 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:31:20.727462   24253 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:31:20.727467   24253 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:31:20.727475   24253 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:31:20.727482   24253 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:31:20.727493   24253 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:31:20.727498   24253 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:31:20.727501   24253 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:31:20.727506   24253 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:31:20.727514   24253 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:31:20.727519   24253 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:31:20.727527   24253 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:31:20.727532   24253 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:31:20.727539   24253 cri.go:89] found id: ""
	I1210 05:31:20.727585   24253 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:31:20.740613   24253 out.go:203] 
	W1210 05:31:20.741640   24253 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:31:20.741658   24253 out.go:285] * 
	* 
	W1210 05:31:20.744574   24253 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:31:20.745582   24253 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (37.78s)
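The kubectl sequence in this test exercises the full csi-hostpath-driver flow: create a claim (hpvc), mount it from a pod (task-pv-pod), snapshot it (new-snapshot-demo), restore the snapshot into a new claim (hpvc-restore), and consume that from task-pv-pod-restore. The storage side completed; only the trailing addon-disable steps hit the runc paused-check failure described earlier. The object names come from the log; the manifest below is an illustrative sketch of the claim-plus-snapshot pair rather than the exact testdata files, and the class names are assumptions.

# sketch: claim and snapshot with the same names the test uses (class names assumed)
kubectl --context addons-193927 create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc            # assumed name of the class installed by the addon
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass    # assumed name of the addon's snapshot class
  source:
    persistentVolumeClaimName: hpvc
EOF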

                                                
                                    
TestAddons/parallel/Headlamp (2.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-193927 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-193927 --alsologtostderr -v=1: exit status 11 (238.342297ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:30:37.779949   20484 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:37.780132   20484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:37.780144   20484 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:37.780149   20484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:37.780400   20484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:37.780713   20484 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:37.781092   20484 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:37.781116   20484 addons.go:622] checking whether the cluster is paused
	I1210 05:30:37.781236   20484 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:37.781251   20484 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:37.781859   20484 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:37.799111   20484 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:37.799163   20484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:37.815393   20484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:37.908121   20484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:37.908190   20484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:37.937201   20484 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:37.937230   20484 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:37.937236   20484 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:37.937241   20484 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:37.937245   20484 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:37.937250   20484 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:37.937254   20484 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:37.937259   20484 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:37.937263   20484 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:37.937279   20484 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:37.937288   20484 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:37.937293   20484 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:37.937299   20484 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:37.937305   20484 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:37.937309   20484 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:37.937321   20484 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:37.937329   20484 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:37.937335   20484 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:37.937339   20484 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:37.937343   20484 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:37.937351   20484 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:37.937356   20484 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:37.937363   20484 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:37.937369   20484 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:37.937376   20484 cri.go:89] found id: ""
	I1210 05:30:37.937429   20484 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:37.951325   20484 out.go:203] 
	W1210 05:30:37.952507   20484 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:37.952521   20484 out.go:285] * 
	* 
	W1210 05:30:37.955374   20484 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:37.956558   20484 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-193927 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-193927
helpers_test.go:244: (dbg) docker inspect addons-193927:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d",
	        "Created": "2025-12-10T05:29:04.370422332Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:29:04.401985856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/hosts",
	        "LogPath": "/var/lib/docker/containers/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d/d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d-json.log",
	        "Name": "/addons-193927",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-193927:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-193927",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9822419bc1196f4aa4320a2438080f0e8206aefc2d38ac282fc76185ca90e8d",
	                "LowerDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ea6ebd6640cac5aa52f1c85b843c3940c4cf37feae8399570705f14c1d15272c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-193927",
	                "Source": "/var/lib/docker/volumes/addons-193927/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-193927",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-193927",
	                "name.minikube.sigs.k8s.io": "addons-193927",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "85703a43c0c5a932407537da90729dd6048aa9a745c1e0574e64f661747b9863",
	            "SandboxKey": "/var/run/docker/netns/85703a43c0c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-193927": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d72278174f2f83a56b57bff0dfa7876641b8e88aefe937e9b34b3af1750bdc5d",
	                    "EndpointID": "209ba600f17d85ad1770fffe769fe7b8c00c26e435203a836cf6af1fc41934d1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "e2:0c:b9:b3:f5:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-193927",
	                        "d9822419bc11"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
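The docker inspect dump above is captured wholesale by the post-mortem helper; when reading it by hand, the fields these tests actually depend on (container state, the published SSH port minikube tunnels through, and the node IP) can be pulled with docker format templates. The port template below is the one minikube itself runs earlier in this log; the other two are ordinary format strings over fields visible in the JSON above.

docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' addons-193927
docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-193927
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-193927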
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-193927 -n addons-193927
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-193927 logs -n 25: (1.072475719s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-967603 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-967603   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-967603                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-967603   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-307099 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-307099   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-307099                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-307099   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-967320 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                           │ download-only-967320   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-967320                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-967320   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-967603                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-967603   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-307099                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-307099   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-967320                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-967320   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ --download-only -p download-docker-192359 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-192359 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ -p download-docker-192359                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-192359 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ --download-only -p binary-mirror-899655 --alsologtostderr --binary-mirror http://127.0.0.1:36067 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-899655   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ -p binary-mirror-899655                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-899655   │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ addons  │ disable dashboard -p addons-193927                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-193927          │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-193927                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-193927          │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ start   │ -p addons-193927 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-193927          │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:30 UTC │
	│ addons  │ addons-193927 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-193927          │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ addons-193927 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-193927          │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	│ addons  │ enable headlamp -p addons-193927 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-193927          │ jenkins │ v1.37.0 │ 10 Dec 25 05:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:46.275231   11129 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:46.275342   11129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:46.275351   11129 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:46.275354   11129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:46.275533   11129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:28:46.276036   11129 out.go:368] Setting JSON to false
	I1210 05:28:46.276798   11129 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":670,"bootTime":1765343856,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:46.276879   11129 start.go:143] virtualization: kvm guest
	I1210 05:28:46.278506   11129 out.go:179] * [addons-193927] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:46.279476   11129 notify.go:221] Checking for updates...
	I1210 05:28:46.279489   11129 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:28:46.280427   11129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:46.281443   11129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:28:46.282478   11129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:28:46.283379   11129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:28:46.284217   11129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:28:46.285244   11129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:46.308689   11129 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:28:46.308781   11129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:46.362767   11129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:28:46.353762855 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:46.362878   11129 docker.go:319] overlay module found
	I1210 05:28:46.364328   11129 out.go:179] * Using the docker driver based on user configuration
	I1210 05:28:46.365291   11129 start.go:309] selected driver: docker
	I1210 05:28:46.365305   11129 start.go:927] validating driver "docker" against <nil>
	I1210 05:28:46.365315   11129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:28:46.365837   11129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:46.417328   11129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:28:46.407268077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:46.417526   11129 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:46.417752   11129 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:28:46.419169   11129 out.go:179] * Using Docker driver with root privileges
	I1210 05:28:46.420109   11129 cni.go:84] Creating CNI manager for ""
	I1210 05:28:46.420178   11129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:28:46.420192   11129 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:46.420270   11129 start.go:353] cluster config:
	{Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1210 05:28:46.421319   11129 out.go:179] * Starting "addons-193927" primary control-plane node in "addons-193927" cluster
	I1210 05:28:46.422150   11129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 05:28:46.423103   11129 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:28:46.423951   11129 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 05:28:46.423981   11129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:28:46.439627   11129 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:28:46.439725   11129 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 05:28:46.439753   11129 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 05:28:46.439762   11129 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 05:28:46.439772   11129 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 05:28:46.439782   11129 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	W1210 05:28:46.448206   11129 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 05:28:46.533008   11129 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 05:28:46.533271   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:46.533398   11129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/config.json ...
	I1210 05:28:46.533426   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/config.json: {Name:mk15220b80d6396ef85d3cd2c5fbeb1c706f7513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:28:46.662887   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:46.791462   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:46.928008   11129 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928034   11129 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928040   11129 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928045   11129 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.927996   11129 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928004   11129 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928134   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 05:28:46.928153   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 05:28:46.928165   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 05:28:46.928168   11129 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 166.131µs
	I1210 05:28:46.928182   11129 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 05:28:46.928139   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:28:46.928183   11129 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 143.143µs
	I1210 05:28:46.928195   11129 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 05:28:46.928195   11129 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 211.169µs
	I1210 05:28:46.928162   11129 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 131.166µs
	I1210 05:28:46.928203   11129 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:28:46.928205   11129 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 05:28:46.928139   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 05:28:46.928216   11129 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 173.939µs
	I1210 05:28:46.928228   11129 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 05:28:46.928188   11129 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928207   11129 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:46.928261   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 05:28:46.928282   11129 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 291.88µs
	I1210 05:28:46.928297   11129 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 05:28:46.928310   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 05:28:46.928333   11129 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 223.365µs
	I1210 05:28:46.928350   11129 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 05:28:46.928317   11129 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:28:46.928369   11129 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 229.006µs
	I1210 05:28:46.928379   11129 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:28:46.928386   11129 cache.go:87] Successfully saved all images to host disk.
	I1210 05:29:00.009752   11129 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 05:29:00.009791   11129 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:29:00.009842   11129 start.go:360] acquireMachinesLock for addons-193927: {Name:mk44c4bc22782f28a1ec2fd1a231e15d9422e280 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:00.009950   11129 start.go:364] duration metric: took 86.083µs to acquireMachinesLock for "addons-193927"
	I1210 05:29:00.009981   11129 start.go:93] Provisioning new machine with config: &{Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:29:00.010055   11129 start.go:125] createHost starting for "" (driver="docker")
	I1210 05:29:00.161578   11129 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 05:29:00.161863   11129 start.go:159] libmachine.API.Create for "addons-193927" (driver="docker")
	I1210 05:29:00.161892   11129 client.go:173] LocalClient.Create starting
	I1210 05:29:00.162031   11129 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 05:29:00.225801   11129 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 05:29:00.288616   11129 cli_runner.go:164] Run: docker network inspect addons-193927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 05:29:00.305731   11129 cli_runner.go:211] docker network inspect addons-193927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 05:29:00.305797   11129 network_create.go:284] running [docker network inspect addons-193927] to gather additional debugging logs...
	I1210 05:29:00.305816   11129 cli_runner.go:164] Run: docker network inspect addons-193927
	W1210 05:29:00.321253   11129 cli_runner.go:211] docker network inspect addons-193927 returned with exit code 1
	I1210 05:29:00.321278   11129 network_create.go:287] error running [docker network inspect addons-193927]: docker network inspect addons-193927: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-193927 not found
	I1210 05:29:00.321290   11129 network_create.go:289] output of [docker network inspect addons-193927]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-193927 not found
	
	** /stderr **
	I1210 05:29:00.321392   11129 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:29:00.337561   11129 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb2a80}
	I1210 05:29:00.337602   11129 network_create.go:124] attempt to create docker network addons-193927 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 05:29:00.337657   11129 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-193927 addons-193927
	I1210 05:29:00.628488   11129 network_create.go:108] docker network addons-193927 192.168.49.0/24 created
	I1210 05:29:00.628519   11129 kic.go:121] calculated static IP "192.168.49.2" for the "addons-193927" container
	I1210 05:29:00.628574   11129 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 05:29:00.643678   11129 cli_runner.go:164] Run: docker volume create addons-193927 --label name.minikube.sigs.k8s.io=addons-193927 --label created_by.minikube.sigs.k8s.io=true
	I1210 05:29:00.698887   11129 oci.go:103] Successfully created a docker volume addons-193927
	I1210 05:29:00.698962   11129 cli_runner.go:164] Run: docker run --rm --name addons-193927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193927 --entrypoint /usr/bin/test -v addons-193927:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 05:29:04.298271   11129 cli_runner.go:217] Completed: docker run --rm --name addons-193927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193927 --entrypoint /usr/bin/test -v addons-193927:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (3.599273799s)
	I1210 05:29:04.298306   11129 oci.go:107] Successfully prepared a docker volume addons-193927
	I1210 05:29:04.298353   11129 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 05:29:04.298430   11129 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 05:29:04.298461   11129 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 05:29:04.298500   11129 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 05:29:04.353897   11129 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-193927 --name addons-193927 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193927 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-193927 --network addons-193927 --ip 192.168.49.2 --volume addons-193927:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 05:29:04.623551   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Running}}
	I1210 05:29:04.642270   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:04.660724   11129 cli_runner.go:164] Run: docker exec addons-193927 stat /var/lib/dpkg/alternatives/iptables
	I1210 05:29:04.705803   11129 oci.go:144] the created container "addons-193927" has a running status.
	I1210 05:29:04.705835   11129 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa...
	I1210 05:29:04.744406   11129 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 05:29:04.771674   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:04.788869   11129 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 05:29:04.788887   11129 kic_runner.go:114] Args: [docker exec --privileged addons-193927 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 05:29:04.826184   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:04.843653   11129 machine.go:94] provisionDockerMachine start ...
	I1210 05:29:04.843756   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:04.863159   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:04.863505   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:04.863525   11129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:29:04.864985   11129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54540->127.0.0.1:32768: read: connection reset by peer
	I1210 05:29:07.994255   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-193927
	
	I1210 05:29:07.994284   11129 ubuntu.go:182] provisioning hostname "addons-193927"
	I1210 05:29:07.994353   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.010506   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:08.010699   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:08.010711   11129 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-193927 && echo "addons-193927" | sudo tee /etc/hostname
	I1210 05:29:08.146563   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-193927
	
	I1210 05:29:08.146635   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.162745   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:08.162945   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:08.162960   11129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-193927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-193927/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-193927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:29:08.289777   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:29:08.289800   11129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 05:29:08.289820   11129 ubuntu.go:190] setting up certificates
	I1210 05:29:08.289831   11129 provision.go:84] configureAuth start
	I1210 05:29:08.289876   11129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193927
	I1210 05:29:08.305988   11129 provision.go:143] copyHostCerts
	I1210 05:29:08.306056   11129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 05:29:08.306201   11129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 05:29:08.306277   11129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 05:29:08.306339   11129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.addons-193927 san=[127.0.0.1 192.168.49.2 addons-193927 localhost minikube]
	I1210 05:29:08.549873   11129 provision.go:177] copyRemoteCerts
	I1210 05:29:08.549921   11129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:29:08.549955   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.566278   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:08.659091   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:29:08.676250   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:29:08.691401   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:29:08.706527   11129 provision.go:87] duration metric: took 416.685244ms to configureAuth
	I1210 05:29:08.706547   11129 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:29:08.706690   11129 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:08.706772   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.723018   11129 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:08.723244   11129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:29:08.723260   11129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:29:08.980428   11129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:29:08.980457   11129 machine.go:97] duration metric: took 4.136780163s to provisionDockerMachine
	I1210 05:29:08.980471   11129 client.go:176] duration metric: took 8.81857247s to LocalClient.Create
	I1210 05:29:08.980501   11129 start.go:167] duration metric: took 8.818636186s to libmachine.API.Create "addons-193927"
	I1210 05:29:08.980512   11129 start.go:293] postStartSetup for "addons-193927" (driver="docker")
	I1210 05:29:08.980530   11129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:29:08.980620   11129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:29:08.980669   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:08.997522   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.091671   11129 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:29:09.094738   11129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:29:09.094760   11129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:29:09.094769   11129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 05:29:09.094827   11129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 05:29:09.094854   11129 start.go:296] duration metric: took 114.330928ms for postStartSetup
	I1210 05:29:09.095156   11129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193927
	I1210 05:29:09.111186   11129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/config.json ...
	I1210 05:29:09.111422   11129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:29:09.111464   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:09.126832   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.217267   11129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:29:09.221244   11129 start.go:128] duration metric: took 9.211177559s to createHost
	I1210 05:29:09.221261   11129 start.go:83] releasing machines lock for "addons-193927", held for 9.211297358s
	I1210 05:29:09.221319   11129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193927
	I1210 05:29:09.237847   11129 ssh_runner.go:195] Run: cat /version.json
	I1210 05:29:09.237885   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:09.237944   11129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:29:09.238021   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:09.254879   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.255073   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:09.399543   11129 ssh_runner.go:195] Run: systemctl --version
	I1210 05:29:09.405292   11129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:29:09.434774   11129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:29:09.438732   11129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:29:09.438781   11129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:29:09.461898   11129 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:29:09.461912   11129 start.go:496] detecting cgroup driver to use...
	I1210 05:29:09.461936   11129 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 05:29:09.461967   11129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:29:09.476242   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:29:09.486646   11129 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:29:09.486688   11129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:29:09.500959   11129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:29:09.515972   11129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:29:09.592464   11129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:29:09.673555   11129 docker.go:234] disabling docker service ...
	I1210 05:29:09.673608   11129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:29:09.689774   11129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:29:09.700819   11129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:29:09.778913   11129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:29:09.854864   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:29:09.865758   11129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:29:09.878150   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.003553   11129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:29:10.003613   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.013761   11129 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 05:29:10.013816   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.022038   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.030054   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.037707   11129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:29:10.044862   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.052412   11129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.064248   11129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:10.072049   11129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:29:10.078307   11129 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:29:10.078342   11129 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:29:10.089168   11129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:29:10.095636   11129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:10.171488   11129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 05:29:10.294427   11129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:29:10.294503   11129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:29:10.297976   11129 start.go:564] Will wait 60s for crictl version
	I1210 05:29:10.298027   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:10.301253   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:29:10.324261   11129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 05:29:10.324360   11129 ssh_runner.go:195] Run: crio --version
	I1210 05:29:10.350430   11129 ssh_runner.go:195] Run: crio --version
	I1210 05:29:10.378598   11129 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 05:29:10.379733   11129 cli_runner.go:164] Run: docker network inspect addons-193927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:29:10.395487   11129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:29:10.399157   11129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:10.408552   11129 kubeadm.go:884] updating cluster {Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:29:10.408714   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.536279   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.659025   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:10.789208   11129 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 05:29:10.789269   11129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:29:10.811164   11129 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 05:29:10.811187   11129 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 05:29:10.811293   11129 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:10.811313   11129 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:10.811328   11129 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:29:10.811337   11129 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:10.811360   11129 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:10.811259   11129 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:10.811249   11129 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:10.811271   11129 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:10.812455   11129 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:10.812455   11129 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:10.812496   11129 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:29:10.812529   11129 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:10.812459   11129 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:10.812456   11129 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:10.812465   11129 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:10.812859   11129 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:10.967332   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:10.974231   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:10.977609   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:10.981393   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:10.991101   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.000681   11129 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 05:29:11.000720   11129 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.000761   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.002070   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.009455   11129 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 05:29:11.009501   11129 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.009550   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.011013   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 05:29:11.018311   11129 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 05:29:11.018361   11129 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.018422   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.019722   11129 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 05:29:11.019768   11129 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.019825   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.031449   11129 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 05:29:11.031474   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.031489   11129 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.031533   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.039342   11129 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 05:29:11.039377   11129 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.039376   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.039417   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.046905   11129 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 05:29:11.046927   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.046942   11129 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 05:29:11.046977   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.046980   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.060502   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.060520   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.067968   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.068004   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.075238   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.075347   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:11.077387   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.096748   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:11.096748   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.100069   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.100100   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:11.111659   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:11.111763   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:11.113783   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:11.131445   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:11.131466   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 05:29:11.131554   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:11.134781   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:11.134843   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 05:29:11.134920   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:11.141276   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:11.147599   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 05:29:11.147614   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 05:29:11.147692   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:11.147706   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:11.160289   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 05:29:11.160347   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 05:29:11.160377   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 05:29:11.160391   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:11.171550   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 05:29:11.171586   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 05:29:11.171671   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:11.171585   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 05:29:11.176025   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 05:29:11.176053   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 05:29:11.176147   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 05:29:11.176178   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 05:29:11.176183   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 05:29:11.176158   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 05:29:11.176207   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 05:29:11.176259   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:11.223615   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 05:29:11.223640   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 05:29:11.225866   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 05:29:11.225891   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 05:29:11.228053   11129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:11.286037   11129 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:11.286113   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:11.312547   11129 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 05:29:11.312592   11129 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:11.312642   11129 ssh_runner.go:195] Run: which crictl
	I1210 05:29:11.669880   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 05:29:11.669918   11129 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:11.669955   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:11.669966   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:12.884833   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.214855855s)
	I1210 05:29:12.884857   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 05:29:12.884876   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:12.884878   11129 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.214889793s)
	I1210 05:29:12.884910   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:12.884945   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:13.874231   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 05:29:13.874238   11129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:13.874268   11129 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:13.874303   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:13.901143   11129 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 05:29:13.901236   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:15.073710   11129 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.172450757s)
	I1210 05:29:15.073744   11129 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 05:29:15.073762   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 05:29:15.073712   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.199376609s)
	I1210 05:29:15.073839   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 05:29:15.073869   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:15.073923   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:16.369016   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (1.295070644s)
	I1210 05:29:16.369040   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 05:29:16.369060   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:16.369133   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.459526   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.090365034s)
	I1210 05:29:17.459558   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 05:29:17.459587   11129 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:17.459631   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:18.475563   11129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.015910767s)
	I1210 05:29:18.475588   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 05:29:18.475615   11129 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:18.475657   11129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:18.974434   11129 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 05:29:18.974481   11129 cache_images.go:125] Successfully loaded all cached images
	I1210 05:29:18.974488   11129 cache_images.go:94] duration metric: took 8.163287278s to LoadCachedImages
	I1210 05:29:18.974503   11129 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1210 05:29:18.974592   11129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-193927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:29:18.974698   11129 ssh_runner.go:195] Run: crio config
	I1210 05:29:19.018535   11129 cni.go:84] Creating CNI manager for ""
	I1210 05:29:19.018554   11129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:29:19.018571   11129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:29:19.018595   11129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-193927 NodeName:addons-193927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:29:19.018706   11129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-193927"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:29:19.018768   11129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:19.026566   11129 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 05:29:19.026618   11129 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:19.034020   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 05:29:19.034050   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 05:29:19.034046   11129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:19.034115   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 05:29:19.034124   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:19.034167   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 05:29:19.037710   11129 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 05:29:19.037733   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 05:29:19.038298   11129 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 05:29:19.038321   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 05:29:19.053892   11129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 05:29:19.090636   11129 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 05:29:19.090671   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 05:29:19.505163   11129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:29:19.512360   11129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 05:29:19.523634   11129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:29:19.537516   11129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1210 05:29:19.548610   11129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:29:19.551617   11129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:19.560340   11129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:19.635784   11129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:19.659025   11129 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927 for IP: 192.168.49.2
	I1210 05:29:19.659044   11129 certs.go:195] generating shared ca certs ...
	I1210 05:29:19.659063   11129 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.659244   11129 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 05:29:19.793147   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt ...
	I1210 05:29:19.793181   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt: {Name:mkc0d0f92e95d60b30ec1dbf56195b2dda84cffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.793350   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key ...
	I1210 05:29:19.793366   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key: {Name:mkcb24c7e12076b8d17133f829204e050e518554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.793470   11129 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 05:29:19.822657   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt ...
	I1210 05:29:19.822678   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt: {Name:mkc9fb3c2bc5b72aa1ea9c45f23c0f33021a2b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.822824   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key ...
	I1210 05:29:19.822838   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key: {Name:mkb04095fb63c55d15225717a2eee3c7c5e76061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.822927   11129 certs.go:257] generating profile certs ...
	I1210 05:29:19.822997   11129 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.key
	I1210 05:29:19.823015   11129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt with IP's: []
	I1210 05:29:19.867323   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt ...
	I1210 05:29:19.867341   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: {Name:mk01319d2752e614055082ddab1c9e855df1f14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.867470   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.key ...
	I1210 05:29:19.867483   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.key: {Name:mk35a3912ad5f367c88e2a7048f8fec25874ffac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.867574   11129 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e
	I1210 05:29:19.867596   11129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 05:29:19.975099   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e ...
	I1210 05:29:19.975122   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e: {Name:mkbffd692b7f8649db24e2e6cd07451c5634743b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.975243   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e ...
	I1210 05:29:19.975256   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e: {Name:mk0f10262d3b363611bb28322382111b525ec8f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:19.975321   11129 certs.go:382] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt.7daa7a3e -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt
	I1210 05:29:19.975391   11129 certs.go:386] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key.7daa7a3e -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key
	I1210 05:29:19.975437   11129 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key
	I1210 05:29:19.975453   11129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt with IP's: []
	I1210 05:29:20.177289   11129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt ...
	I1210 05:29:20.177312   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt: {Name:mk3b440df824a859a3d6377a95acc4bb2c2ea5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:20.177474   11129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key ...
	I1210 05:29:20.177485   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key: {Name:mkf17594af389a3170388dd608c101b0e689cce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:20.177664   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:29:20.177705   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:29:20.177737   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:29:20.177761   11129 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 05:29:20.178355   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:29:20.194906   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:29:20.210550   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:29:20.225947   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:29:20.241362   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:29:20.256890   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:29:20.272184   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:29:20.287280   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:29:20.302740   11129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:29:20.319943   11129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:29:20.331150   11129 ssh_runner.go:195] Run: openssl version
	I1210 05:29:20.336590   11129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.342971   11129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:29:20.351722   11129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.354960   11129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.355003   11129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:20.389415   11129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:29:20.397123   11129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 05:29:20.405555   11129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:29:20.408849   11129 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:29:20.408899   11129 kubeadm.go:401] StartCluster: {Name:addons-193927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-193927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:29:20.408969   11129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:29:20.409033   11129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:29:20.432690   11129 cri.go:89] found id: ""
	I1210 05:29:20.432737   11129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:29:20.439620   11129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:29:20.446705   11129 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:29:20.446742   11129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:29:20.453499   11129 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:29:20.453520   11129 kubeadm.go:158] found existing configuration files:
	
	I1210 05:29:20.453548   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:29:20.460201   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:29:20.460263   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:29:20.466822   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:29:20.473463   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:29:20.473494   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:29:20.479959   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:29:20.486644   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:29:20.486687   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:29:20.493040   11129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:29:20.499631   11129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:29:20.499663   11129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:29:20.506122   11129 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:29:20.558111   11129 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 05:29:20.610165   11129 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:29:30.249205   11129 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 05:29:30.249298   11129 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:29:30.249409   11129 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:29:30.249478   11129 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 05:29:30.249520   11129 kubeadm.go:319] OS: Linux
	I1210 05:29:30.249565   11129 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:29:30.249607   11129 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:29:30.249674   11129 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:29:30.249752   11129 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:29:30.249822   11129 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:29:30.249890   11129 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:29:30.249957   11129 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:29:30.250018   11129 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 05:29:30.250142   11129 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:29:30.250278   11129 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:29:30.250404   11129 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:29:30.250493   11129 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:29:30.251846   11129 out.go:252]   - Generating certificates and keys ...
	I1210 05:29:30.251907   11129 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:29:30.251982   11129 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:29:30.252054   11129 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:29:30.252141   11129 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:29:30.252206   11129 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:29:30.252251   11129 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:29:30.252307   11129 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:29:30.252421   11129 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-193927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:29:30.252467   11129 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:29:30.252568   11129 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-193927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:29:30.252630   11129 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:29:30.252695   11129 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:29:30.252744   11129 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:29:30.252788   11129 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:29:30.252833   11129 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:29:30.252880   11129 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:29:30.252922   11129 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:29:30.252977   11129 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:29:30.253065   11129 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:29:30.253194   11129 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:29:30.253278   11129 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:29:30.255254   11129 out.go:252]   - Booting up control plane ...
	I1210 05:29:30.255335   11129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:29:30.255400   11129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:29:30.255463   11129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:29:30.255548   11129 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:29:30.255629   11129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:29:30.255715   11129 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:29:30.255796   11129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:29:30.255838   11129 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:29:30.255958   11129 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:29:30.256095   11129 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:29:30.256185   11129 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000744816s
	I1210 05:29:30.256264   11129 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:29:30.256365   11129 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 05:29:30.256449   11129 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:29:30.256522   11129 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:29:30.256591   11129 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004261137s
	I1210 05:29:30.256644   11129 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.698527335s
	I1210 05:29:30.256701   11129 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501267806s
	I1210 05:29:30.256809   11129 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:29:30.256935   11129 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:29:30.256989   11129 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:29:30.257165   11129 kubeadm.go:319] [mark-control-plane] Marking the node addons-193927 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:29:30.257215   11129 kubeadm.go:319] [bootstrap-token] Using token: tjsxdu.6ugkds5uf0q4rr7i
	I1210 05:29:30.258289   11129 out.go:252]   - Configuring RBAC rules ...
	I1210 05:29:30.258386   11129 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:29:30.258472   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:29:30.258583   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:29:30.258683   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:29:30.258777   11129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:29:30.258845   11129 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:29:30.258954   11129 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:29:30.259008   11129 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:29:30.259074   11129 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:29:30.259104   11129 kubeadm.go:319] 
	I1210 05:29:30.259189   11129 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:29:30.259203   11129 kubeadm.go:319] 
	I1210 05:29:30.259315   11129 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:29:30.259324   11129 kubeadm.go:319] 
	I1210 05:29:30.259359   11129 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:29:30.259443   11129 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:29:30.259524   11129 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:29:30.259532   11129 kubeadm.go:319] 
	I1210 05:29:30.259602   11129 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:29:30.259611   11129 kubeadm.go:319] 
	I1210 05:29:30.259678   11129 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:29:30.259687   11129 kubeadm.go:319] 
	I1210 05:29:30.259740   11129 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:29:30.259807   11129 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:29:30.259871   11129 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:29:30.259877   11129 kubeadm.go:319] 
	I1210 05:29:30.259941   11129 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:29:30.260006   11129 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:29:30.260012   11129 kubeadm.go:319] 
	I1210 05:29:30.260093   11129 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tjsxdu.6ugkds5uf0q4rr7i \
	I1210 05:29:30.260229   11129 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc \
	I1210 05:29:30.260276   11129 kubeadm.go:319] 	--control-plane 
	I1210 05:29:30.260290   11129 kubeadm.go:319] 
	I1210 05:29:30.260403   11129 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:29:30.260411   11129 kubeadm.go:319] 
	I1210 05:29:30.260479   11129 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tjsxdu.6ugkds5uf0q4rr7i \
	I1210 05:29:30.260580   11129 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc 
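The two join commands above are the standard tail of `kubeadm init` output. If the bootstrap token later expires, equivalent values can be regenerated inside the node; a minimal sketch, assuming the same certificate directory (/var/lib/minikube/certs) reported in the [certs] phase above and that kubeadm is on PATH (e.g. via `minikube ssh`):

	# Print a fresh "kubeadm join ..." line with a new bootstrap token:
	sudo kubeadm token create --print-join-command
	# Recompute the --discovery-token-ca-cert-hash value from the cluster CA:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'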
	I1210 05:29:30.260590   11129 cni.go:84] Creating CNI manager for ""
	I1210 05:29:30.260596   11129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:29:30.261764   11129 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 05:29:30.263000   11129 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 05:29:30.266985   11129 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 05:29:30.267001   11129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 05:29:30.278871   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 05:29:30.468975   11129 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:29:30.469093   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:30.469095   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-193927 minikube.k8s.io/updated_at=2025_12_10T05_29_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=addons-193927 minikube.k8s.io/primary=true
	I1210 05:29:30.478546   11129 ops.go:34] apiserver oom_adj: -16
	I1210 05:29:30.545908   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:31.046178   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:31.546421   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:32.046819   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:32.546884   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:33.046185   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:33.546485   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:34.046321   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:34.545952   11129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:34.606669   11129 kubeadm.go:1114] duration metric: took 4.137641273s to wait for elevateKubeSystemPrivileges
	I1210 05:29:34.606706   11129 kubeadm.go:403] duration metric: took 14.197810043s to StartCluster
	I1210 05:29:34.606726   11129 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:34.606842   11129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:29:34.607233   11129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:34.607431   11129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:29:34.607451   11129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:29:34.607512   11129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 05:29:34.607651   11129 addons.go:70] Setting yakd=true in profile "addons-193927"
	I1210 05:29:34.607668   11129 addons.go:70] Setting ingress-dns=true in profile "addons-193927"
	I1210 05:29:34.607687   11129 addons.go:70] Setting inspektor-gadget=true in profile "addons-193927"
	I1210 05:29:34.607683   11129 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-193927"
	I1210 05:29:34.607678   11129 addons.go:239] Setting addon yakd=true in "addons-193927"
	I1210 05:29:34.607705   11129 addons.go:70] Setting metrics-server=true in profile "addons-193927"
	I1210 05:29:34.607701   11129 addons.go:70] Setting gcp-auth=true in profile "addons-193927"
	I1210 05:29:34.607718   11129 addons.go:239] Setting addon metrics-server=true in "addons-193927"
	I1210 05:29:34.607726   11129 mustload.go:66] Loading cluster: addons-193927
	I1210 05:29:34.607743   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607747   11129 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-193927"
	I1210 05:29:34.607750   11129 addons.go:70] Setting ingress=true in profile "addons-193927"
	I1210 05:29:34.607762   11129 addons.go:239] Setting addon ingress=true in "addons-193927"
	I1210 05:29:34.607760   11129 addons.go:70] Setting storage-provisioner=true in profile "addons-193927"
	I1210 05:29:34.607773   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607778   11129 addons.go:239] Setting addon storage-provisioner=true in "addons-193927"
	I1210 05:29:34.607784   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607800   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607802   11129 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-193927"
	I1210 05:29:34.607820   11129 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-193927"
	I1210 05:29:34.607844   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607846   11129 addons.go:70] Setting default-storageclass=true in profile "addons-193927"
	I1210 05:29:34.607861   11129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-193927"
	I1210 05:29:34.607919   11129 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:34.608156   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608216   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608266   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608268   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608277   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608281   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608314   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608370   11129 addons.go:70] Setting registry=true in profile "addons-193927"
	I1210 05:29:34.608389   11129 addons.go:239] Setting addon registry=true in "addons-193927"
	I1210 05:29:34.608417   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.608836   11129 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-193927"
	I1210 05:29:34.608857   11129 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-193927"
	I1210 05:29:34.608882   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.608894   11129 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-193927"
	I1210 05:29:34.608913   11129 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-193927"
	I1210 05:29:34.609202   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.607740   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.609346   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.609561   11129 out.go:179] * Verifying Kubernetes components...
	I1210 05:29:34.609644   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.610073   11129 addons.go:70] Setting volumesnapshots=true in profile "addons-193927"
	I1210 05:29:34.610113   11129 addons.go:239] Setting addon volumesnapshots=true in "addons-193927"
	I1210 05:29:34.610138   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.610663   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.619344   11129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:34.619729   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.607690   11129 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:34.607699   11129 addons.go:239] Setting addon ingress-dns=true in "addons-193927"
	I1210 05:29:34.620021   11129 addons.go:70] Setting cloud-spanner=true in profile "addons-193927"
	I1210 05:29:34.620024   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.620040   11129 addons.go:239] Setting addon cloud-spanner=true in "addons-193927"
	I1210 05:29:34.620097   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.607698   11129 addons.go:239] Setting addon inspektor-gadget=true in "addons-193927"
	I1210 05:29:34.620391   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.620417   11129 addons.go:70] Setting registry-creds=true in profile "addons-193927"
	I1210 05:29:34.620436   11129 addons.go:239] Setting addon registry-creds=true in "addons-193927"
	I1210 05:29:34.620461   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.620551   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.620858   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.620905   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.608882   11129 addons.go:70] Setting volcano=true in profile "addons-193927"
	I1210 05:29:34.625203   11129 addons.go:239] Setting addon volcano=true in "addons-193927"
	I1210 05:29:34.625244   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.625491   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.625651   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.643859   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.667498   11129 addons.go:239] Setting addon default-storageclass=true in "addons-193927"
	I1210 05:29:34.667607   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.668251   11129 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:29:34.668343   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.668378   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:29:34.669381   11129 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:34.669397   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:29:34.669445   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.671164   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:29:34.673586   11129 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:29:34.674484   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:29:34.674494   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:29:34.674536   11129 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:29:34.674588   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.676283   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:29:34.677691   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:29:34.680581   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:29:34.681657   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:29:34.684470   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:29:34.686101   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:29:34.686116   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:29:34.686203   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.686306   11129 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:29:34.687468   11129 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:29:34.687666   11129 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:34.687678   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:29:34.687731   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.688924   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:29:34.688940   11129 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:29:34.688991   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.695343   11129 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 05:29:34.699280   11129 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:34.699299   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:29:34.699355   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	W1210 05:29:34.706425   11129 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:29:34.711509   11129 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:29:34.713505   11129 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:29:34.714641   11129 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:29:34.714656   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:29:34.714714   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.720191   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:29:34.721442   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:34.722429   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:34.723646   11129 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:34.723781   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:29:34.723905   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.728242   11129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:34.729702   11129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:34.729827   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:29:34.730042   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.737112   11129 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:29:34.738833   11129 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:29:34.738958   11129 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:29:34.739119   11129 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:34.739134   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:29:34.739288   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.740105   11129 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:34.740121   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:29:34.740299   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.740643   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:29:34.740658   11129 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:29:34.740705   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.740386   11129 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-193927"
	I1210 05:29:34.740864   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:34.741974   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:34.743975   11129 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:29:34.745523   11129 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:34.745575   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:29:34.745683   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.749332   11129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
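The sed pipeline above patches the coredns ConfigMap in place: it injects a hosts block in front of the `forward . /etc/resolv.conf` plugin so that host.minikube.internal resolves to 192.168.49.1, and adds `log` ahead of `errors`. Reconstructed from the sed expressions (a sketch, not a dump of the live ConfigMap), the affected part of the Corefile ends up roughly like:

	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf

The "host record injected into CoreDNS's ConfigMap" line at 05:29:35.082016 below confirms the replace completed.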
	I1210 05:29:34.773162   11129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:34.773185   11129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:29:34.773273   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.774130   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.787858   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.797846   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.798426   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.798989   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.799408   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.799890   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.804647   11129 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:29:34.806350   11129 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:29:34.807370   11129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:34.807388   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:29:34.807447   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:34.808760   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.818890   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.827054   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.834299   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.835161   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.835668   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:34.839036   11129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:34.839247   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	W1210 05:29:34.854872   11129 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:29:34.854926   11129 retry.go:31] will retry after 368.340878ms: ssh: handshake failed: EOF
	I1210 05:29:34.855729   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	W1210 05:29:34.857363   11129 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:29:34.857528   11129 retry.go:31] will retry after 193.849913ms: ssh: handshake failed: EOF
	I1210 05:29:34.964149   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:29:34.964483   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:29:34.966351   11129 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:29:34.966450   11129 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:29:34.976133   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:34.976134   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:34.987493   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:29:34.987516   11129 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:29:34.990032   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:34.991198   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:34.995572   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:34.996004   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:29:34.996025   11129 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:29:34.999253   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:35.003076   11129 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:35.003105   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:29:35.004137   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:29:35.004155   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:29:35.014274   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:35.033366   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:29:35.033398   11129 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:29:35.038460   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:35.043396   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:35.049204   11129 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:35.049227   11129 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:29:35.081756   11129 node_ready.go:35] waiting up to 6m0s for node "addons-193927" to be "Ready" ...
	I1210 05:29:35.082016   11129 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1210 05:29:35.083035   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:29:35.083054   11129 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:29:35.086459   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:35.102428   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:29:35.102451   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:29:35.118435   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:35.146561   11129 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:35.146590   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:29:35.156966   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:29:35.156997   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:29:35.207664   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:35.215348   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:29:35.215376   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:29:35.283865   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:29:35.283896   11129 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:29:35.311120   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:35.348796   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:29:35.349157   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:29:35.395641   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:29:35.395690   11129 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:29:35.435522   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:29:35.435542   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:29:35.482207   11129 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:29:35.482303   11129 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:29:35.489963   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:29:35.490027   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:29:35.526187   11129 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:35.526212   11129 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:29:35.527052   11129 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:29:35.527073   11129 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:29:35.580257   11129 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:29:35.580297   11129 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:29:35.586760   11129 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-193927" context rescaled to 1 replicas
	I1210 05:29:35.590211   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:35.608206   11129 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:29:35.608233   11129 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:29:35.638264   11129 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:35.638301   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:29:35.680874   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:36.220064   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.22881525s)
	I1210 05:29:36.220169   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.230098573s)
	I1210 05:29:36.220192   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.224593126s)
	I1210 05:29:36.220199   11129 addons.go:495] Verifying addon ingress=true in "addons-193927"
	I1210 05:29:36.220242   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.220969444s)
	I1210 05:29:36.220326   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206025779s)
	I1210 05:29:36.220374   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.181892806s)
	I1210 05:29:36.220499   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.177078114s)
	I1210 05:29:36.220593   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.134107135s)
	I1210 05:29:36.220622   11129 addons.go:495] Verifying addon registry=true in "addons-193927"
	I1210 05:29:36.220698   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.102228073s)
	I1210 05:29:36.220721   11129 addons.go:495] Verifying addon metrics-server=true in "addons-193927"
	I1210 05:29:36.220780   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.013083589s)
	I1210 05:29:36.223161   11129 out.go:179] * Verifying ingress addon...
	I1210 05:29:36.223199   11129 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-193927 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:29:36.223325   11129 out.go:179] * Verifying registry addon...
	I1210 05:29:36.225052   11129 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:29:36.228289   11129 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:29:36.234367   11129 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:29:36.234395   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:36.235112   11129 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1210 05:29:36.235317   11129 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1210 05:29:36.542985   11129 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-193927"
	I1210 05:29:36.545212   11129 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:29:36.548136   11129 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:29:36.551477   11129 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:29:36.551495   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:36.728684   11129 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:29:36.728708   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:36.730669   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:36.950600   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.26967317s)
	W1210 05:29:36.950649   11129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:29:36.950672   11129 retry.go:31] will retry after 319.039602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:29:37.051332   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:37.084499   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:37.228328   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:37.230238   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:37.270866   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:37.550900   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:37.727633   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:37.730228   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:38.051546   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:38.227799   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:38.230339   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:38.550326   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:38.727623   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:38.729999   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:39.051637   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:39.227479   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:39.230911   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:39.550361   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:39.584155   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:39.694664   11129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.423758478s)
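The failed apply and the forced re-apply recorded above are the usual CRD race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml cannot be mapped until the volumesnapshotclasses.snapshot.storage.k8s.io CRD reports Established, so the first apply exits with status 1 and retry.go re-runs it. A minimal Go sketch of one way to sidestep that retry by waiting for the Established condition before applying dependent resources; the helper name, poll interval, timeout, and direct use of /var/lib/minikube/kubeconfig are illustrative assumptions, not minikube's own addon code:

    // Hypothetical helper: block until a CRD is Established before applying
    // custom resources (such as the csi-hostpath snapshot class) that need it.
    package main

    import (
        "context"
        "fmt"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForCRDEstablished(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
        for {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
                        return nil // CRD is ready; dependent CRs can now be applied
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("timed out waiting for CRD %s: %w", name, ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := apiextensionsclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        if err := waitForCRDEstablished(ctx, cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            panic(err)
        }
        fmt.Println("CRD established; safe to apply csi-hostpath-snapshotclass.yaml")
    }

The same check is available from the command line as kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io, which avoids the exit-1/retry loop seen above.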
	I1210 05:29:39.728402   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:39.730099   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:40.051157   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:40.228307   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:40.230211   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:40.550898   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:40.728701   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:40.730133   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:41.051352   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:41.227909   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:41.230390   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:41.550745   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:41.584649   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:41.727793   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:41.730348   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:42.051068   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:42.228567   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:42.229952   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:42.260794   11129 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:29:42.260858   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:42.278101   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:42.386960   11129 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:29:42.398736   11129 addons.go:239] Setting addon gcp-auth=true in "addons-193927"
	I1210 05:29:42.398792   11129 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:29:42.399278   11129 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:29:42.415667   11129 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:29:42.415716   11129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:29:42.431461   11129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:29:42.522647   11129 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:42.523818   11129 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:29:42.524862   11129 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:29:42.524872   11129 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:29:42.536719   11129 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:29:42.536735   11129 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:29:42.548294   11129 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:29:42.548310   11129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:29:42.550969   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:42.559836   11129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:29:42.728918   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:42.730584   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:42.845443   11129 addons.go:495] Verifying addon gcp-auth=true in "addons-193927"
	I1210 05:29:42.847598   11129 out.go:179] * Verifying gcp-auth addon...
	I1210 05:29:42.849336   11129 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:29:42.851460   11129 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:29:42.851478   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:43.051461   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:43.227542   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:43.230141   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:43.352241   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:43.550466   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:43.727988   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:43.730756   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:43.852191   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:44.050878   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:44.083645   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:44.227859   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:44.230354   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:44.351327   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:44.550914   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:44.728118   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:44.730680   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:44.851719   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:45.051405   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:45.228124   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:45.230708   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:45.351887   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:45.551358   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:45.727800   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:45.730353   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:45.851682   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:46.051127   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:29:46.084384   11129 node_ready.go:57] node "addons-193927" has "Ready":"False" status (will retry)
	I1210 05:29:46.228191   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:46.230845   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:46.352490   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:46.551300   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:46.727712   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:46.730507   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:46.851578   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:47.051072   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:47.228622   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:47.230045   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:47.352244   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:47.550943   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:47.728438   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:47.729989   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:47.852223   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:48.050750   11129 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:29:48.050769   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:48.084173   11129 node_ready.go:49] node "addons-193927" is "Ready"
	I1210 05:29:48.084196   11129 node_ready.go:38] duration metric: took 13.002414459s for node "addons-193927" to be "Ready" ...
	I1210 05:29:48.084207   11129 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:29:48.084256   11129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:29:48.097808   11129 api_server.go:72] duration metric: took 13.490322663s to wait for apiserver process to appear ...
	I1210 05:29:48.097831   11129 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:29:48.097853   11129 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 05:29:48.103652   11129 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 05:29:48.104664   11129 api_server.go:141] control plane version: v1.34.3
	I1210 05:29:48.104694   11129 api_server.go:131] duration metric: took 6.855343ms to wait for apiserver health ...
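The healthz wait logged at api_server.go:253/279 above is a plain HTTPS GET against https://192.168.49.2:8443/healthz repeated until the endpoint answers 200 with body "ok". A self-contained sketch of that kind of poll; the two-minute budget and the skipped TLS verification are assumptions made to keep the sketch short, where a real check would normally trust the cluster CA instead:

    // Poll the apiserver /healthz endpoint until it reports healthy.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Verification is skipped only so the sketch is self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("apiserver did not become healthy before the deadline")
    }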
	I1210 05:29:48.104704   11129 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:29:48.152545   11129 system_pods.go:59] 20 kube-system pods found
	I1210 05:29:48.152598   11129 system_pods.go:61] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.152616   11129 system_pods.go:61] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:48.152626   11129 system_pods.go:61] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.152642   11129 system_pods.go:61] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.152653   11129 system_pods.go:61] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.152659   11129 system_pods.go:61] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.152665   11129 system_pods.go:61] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.152672   11129 system_pods.go:61] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.152678   11129 system_pods.go:61] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.152692   11129 system_pods.go:61] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.152698   11129 system_pods.go:61] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.152704   11129 system_pods.go:61] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.152711   11129 system_pods.go:61] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.152722   11129 system_pods.go:61] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.152731   11129 system_pods.go:61] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.152743   11129 system_pods.go:61] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.152761   11129 system_pods.go:61] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.152773   11129 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.152785   11129 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.152794   11129 system_pods.go:61] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:29:48.152806   11129 system_pods.go:74] duration metric: took 48.094327ms to wait for pod list to return data ...
	I1210 05:29:48.152819   11129 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:29:48.159844   11129 default_sa.go:45] found service account: "default"
	I1210 05:29:48.159872   11129 default_sa.go:55] duration metric: took 7.046334ms for default service account to be created ...
	I1210 05:29:48.159885   11129 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:29:48.252276   11129 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:29:48.252301   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:48.252718   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:48.253896   11129 system_pods.go:86] 20 kube-system pods found
	I1210 05:29:48.253929   11129 system_pods.go:89] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.253942   11129 system_pods.go:89] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:48.253951   11129 system_pods.go:89] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.253959   11129 system_pods.go:89] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.253971   11129 system_pods.go:89] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.253977   11129 system_pods.go:89] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.253988   11129 system_pods.go:89] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.253994   11129 system_pods.go:89] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.254003   11129 system_pods.go:89] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.254010   11129 system_pods.go:89] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.254018   11129 system_pods.go:89] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.254025   11129 system_pods.go:89] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.254035   11129 system_pods.go:89] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.254043   11129 system_pods.go:89] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.254051   11129 system_pods.go:89] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.254063   11129 system_pods.go:89] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.254074   11129 system_pods.go:89] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.254094   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.254107   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.254119   11129 system_pods.go:89] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:29:48.254137   11129 retry.go:31] will retry after 255.495687ms: missing components: kube-dns
	I1210 05:29:48.352752   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:48.514454   11129 system_pods.go:86] 20 kube-system pods found
	I1210 05:29:48.514498   11129 system_pods.go:89] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.514509   11129 system_pods.go:89] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:48.514519   11129 system_pods.go:89] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.514528   11129 system_pods.go:89] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.514544   11129 system_pods.go:89] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.514550   11129 system_pods.go:89] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.514556   11129 system_pods.go:89] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.514562   11129 system_pods.go:89] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.514567   11129 system_pods.go:89] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.514576   11129 system_pods.go:89] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.514579   11129 system_pods.go:89] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.514584   11129 system_pods.go:89] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.514591   11129 system_pods.go:89] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.514606   11129 system_pods.go:89] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.514617   11129 system_pods.go:89] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.514625   11129 system_pods.go:89] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.514632   11129 system_pods.go:89] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.514640   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.514649   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.514658   11129 system_pods.go:89] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:29:48.514678   11129 retry.go:31] will retry after 342.561952ms: missing components: kube-dns
	I1210 05:29:48.551503   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:48.728634   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:48.730689   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:48.852804   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:48.861917   11129 system_pods.go:86] 20 kube-system pods found
	I1210 05:29:48.861954   11129 system_pods.go:89] "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:29:48.861962   11129 system_pods.go:89] "coredns-66bc5c9577-fk5gt" [d1c80236-6e29-4ae6-8ad1-485df1e1bfab] Running
	I1210 05:29:48.861974   11129 system_pods.go:89] "csi-hostpath-attacher-0" [ce71965d-9049-4bf6-bd66-bd98d7a4127a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:29:48.861996   11129 system_pods.go:89] "csi-hostpath-resizer-0" [32f70c5b-660b-4628-8d60-0ea70a49b757] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:29:48.862009   11129 system_pods.go:89] "csi-hostpathplugin-2wcqc" [a80c278d-1d63-4bd4-b523-62ee1f159b04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:29:48.862014   11129 system_pods.go:89] "etcd-addons-193927" [b3c67699-f7b0-40cc-98cd-c77ea32761b4] Running
	I1210 05:29:48.862022   11129 system_pods.go:89] "kindnet-bbr2p" [f3461857-22c6-4ae5-8b26-76e99f47451a] Running
	I1210 05:29:48.862028   11129 system_pods.go:89] "kube-apiserver-addons-193927" [3e6b034c-b192-45aa-a54f-10f7e28490eb] Running
	I1210 05:29:48.862036   11129 system_pods.go:89] "kube-controller-manager-addons-193927" [4c54a340-100e-4217-85d4-a3b57633f6c3] Running
	I1210 05:29:48.862044   11129 system_pods.go:89] "kube-ingress-dns-minikube" [feacfcc2-2a0d-4dec-b90a-5c4330f27a71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:48.862053   11129 system_pods.go:89] "kube-proxy-j2r54" [a1f05555-cdab-40ab-bb48-154e17085601] Running
	I1210 05:29:48.862059   11129 system_pods.go:89] "kube-scheduler-addons-193927" [2a53e46d-574f-4dbb-be04-1350d03488d3] Running
	I1210 05:29:48.862070   11129 system_pods.go:89] "metrics-server-85b7d694d7-xswrz" [c2b984ad-8af5-448a-8db7-1a2a5e4cff81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:48.862091   11129 system_pods.go:89] "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:48.862106   11129 system_pods.go:89] "registry-6b586f9694-h4d7x" [273724ba-34a3-45fb-bcdf-0ec690ef2c3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:48.862117   11129 system_pods.go:89] "registry-creds-764b6fb674-ghgkh" [9369e244-75eb-4b63-883e-0cb1e1d332eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:48.862131   11129 system_pods.go:89] "registry-proxy-jr8xs" [a219a574-aafe-429a-ae24-5e8f21f31910] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:48.862142   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ckl2" [5f3ee222-7cbb-4203-8242-d5b455a479c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.862154   11129 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v87tq" [aa9e8535-5f58-4fb0-af9b-70424f23d191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:29:48.862163   11129 system_pods.go:89] "storage-provisioner" [fe12c783-3d73-4b2a-9583-730f2fdba136] Running
	I1210 05:29:48.862175   11129 system_pods.go:126] duration metric: took 702.282552ms to wait for k8s-apps to be running ...
	I1210 05:29:48.862187   11129 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:29:48.862238   11129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:48.879197   11129 system_svc.go:56] duration metric: took 17.001637ms WaitForService to wait for kubelet
	I1210 05:29:48.879234   11129 kubeadm.go:587] duration metric: took 14.271744774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:29:48.879256   11129 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:29:48.882406   11129 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 05:29:48.882441   11129 node_conditions.go:123] node cpu capacity is 8
	I1210 05:29:48.882459   11129 node_conditions.go:105] duration metric: took 3.198011ms to run NodePressure ...
	I1210 05:29:48.882476   11129 start.go:242] waiting for startup goroutines ...
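The kapi.go:96 entries that make up the remainder of this log are fixed-interval polls over the pods matching one label selector (csi-hostpath-driver, ingress-nginx, registry, gcp-auth) until every matching pod reports Running. A rough client-go equivalent of such a wait loop, shown for the gcp-auth selector from the log; the function name, poll interval, and six-minute timeout are illustrative assumptions rather than the kapi implementation:

    // Wait until all pods matching a label selector in a namespace are Running.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForLabeledPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        allRunning = false
                        break
                    }
                }
                if allRunning {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pods %q in %q not Running: %w", selector, ns, ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForLabeledPodsRunning(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
            panic(err)
        }
        fmt.Println("gcp-auth pods Running")
    }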
	I1210 05:29:49.051614   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:49.228358   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:49.230562   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:49.352107   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:49.551726   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:49.728722   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:49.730323   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:49.853805   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:50.053242   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:50.229973   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:50.231896   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:50.352980   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:50.552445   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:50.728333   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:50.730610   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:50.853136   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:51.052329   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:51.228333   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:51.231010   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:51.353061   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:51.552203   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:51.727730   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:51.730451   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:51.851763   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:52.051381   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:52.229054   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:52.231391   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:52.354370   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:52.552591   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:52.731205   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:52.732897   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:52.852912   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:53.052160   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:53.228246   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:53.231418   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:53.352198   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:53.552064   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:53.747367   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:53.747659   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:53.852467   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:54.062448   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:54.228677   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:54.230309   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:54.353479   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:54.551650   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:54.728433   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:54.730573   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:54.852456   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:55.051738   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:55.228319   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:55.230284   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:55.353158   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:55.552387   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:55.728682   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:55.730581   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:55.852645   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:56.051480   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:56.228198   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:56.230821   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:56.352680   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:56.551458   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:56.728310   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:56.730318   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:56.853068   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:57.052528   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:57.228586   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:57.230495   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:57.351885   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:57.551880   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:57.728705   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:57.730235   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:57.853016   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:58.051945   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:58.228297   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:58.231058   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.352764   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:58.551728   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:58.730467   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:58.730952   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.852276   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:59.052147   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:59.229151   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:59.230925   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.352704   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:29:59.551833   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:29:59.728778   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:59.730473   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.852420   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:00.051639   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.229286   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.231126   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.353496   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:00.551635   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.728020   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.730659   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.852030   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:01.052298   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.228971   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.231032   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.352870   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:01.552056   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.729206   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.731529   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.852375   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.051628   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.228818   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.230680   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.352107   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.550851   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.728645   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.730534   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.852259   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.050921   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.228367   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.230340   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.352894   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.552929   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.729113   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.731045   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.853350   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.051685   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.228568   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.230826   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.351920   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.551450   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.727877   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.730437   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.852245   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.051846   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.228886   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.230636   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.352798   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.551689   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.728467   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.730216   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.852377   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.051174   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.227834   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.230898   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.352650   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.552354   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.729860   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.731733   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.852134   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.052242   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.228758   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.230307   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.351609   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.551794   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.728501   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.730521   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.852263   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.051103   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.228233   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.230642   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.351807   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.551647   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.728693   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.730873   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.852391   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.051353   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.227840   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.230355   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.351480   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.551197   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.728325   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.730112   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.852867   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.051912   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.229959   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.231052   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.352865   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.551713   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.728311   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.730155   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.852948   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.052165   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.228985   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.230930   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.352866   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.551661   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.728309   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.730198   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.852529   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.051439   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.228192   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.230761   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.352612   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.552430   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.728344   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.730175   11129 kapi.go:107] duration metric: took 36.501884683s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 05:30:12.853201   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.052019   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.229372   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.352526   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.551003   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.728581   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.851430   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.051198   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.228589   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.351586   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.551541   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.727721   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.851860   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.051307   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.227378   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.352624   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.551006   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.728446   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.851468   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.051305   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.228303   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.353249   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.552012   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.729070   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.852533   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.051366   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.229027   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.352763   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.552156   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.729208   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.852516   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.051635   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.228524   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.355262   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.553660   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.729992   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.853016   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.052308   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.228138   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.353265   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.551421   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.730521   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.852163   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.051149   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.228804   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.352448   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.551925   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.728254   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.852505   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.051607   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.228534   11129 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.352218   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.552153   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.729459   11129 kapi.go:107] duration metric: took 45.50440368s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:30:21.852412   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.051396   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.352794   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.552012   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.853173   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.054678   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.353071   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.670867   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.851972   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.052435   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.351928   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.552561   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.852985   11129 kapi.go:107] duration metric: took 42.003643985s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:30:24.858215   11129 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-193927 cluster.
	I1210 05:30:24.859612   11129 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:30:24.860752   11129 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 05:30:25.052226   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.551209   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.051271   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.551508   11129 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.051618   11129 kapi.go:107] duration metric: took 50.503482475s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:30:27.053192   11129 out.go:179] * Enabled addons: registry-creds, ingress-dns, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 05:30:27.054164   11129 addons.go:530] duration metric: took 52.446657211s for enable addons: enabled=[registry-creds ingress-dns amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 05:30:27.054197   11129 start.go:247] waiting for cluster config update ...
	I1210 05:30:27.054218   11129 start.go:256] writing updated cluster config ...
	I1210 05:30:27.054477   11129 ssh_runner.go:195] Run: rm -f paused
	I1210 05:30:27.058315   11129 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:30:27.060760   11129 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fk5gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.064125   11129 pod_ready.go:94] pod "coredns-66bc5c9577-fk5gt" is "Ready"
	I1210 05:30:27.064141   11129 pod_ready.go:86] duration metric: took 3.362812ms for pod "coredns-66bc5c9577-fk5gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.065571   11129 pod_ready.go:83] waiting for pod "etcd-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.068686   11129 pod_ready.go:94] pod "etcd-addons-193927" is "Ready"
	I1210 05:30:27.068706   11129 pod_ready.go:86] duration metric: took 3.118554ms for pod "etcd-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.070431   11129 pod_ready.go:83] waiting for pod "kube-apiserver-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.073663   11129 pod_ready.go:94] pod "kube-apiserver-addons-193927" is "Ready"
	I1210 05:30:27.073679   11129 pod_ready.go:86] duration metric: took 3.231055ms for pod "kube-apiserver-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.075223   11129 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.461287   11129 pod_ready.go:94] pod "kube-controller-manager-addons-193927" is "Ready"
	I1210 05:30:27.461313   11129 pod_ready.go:86] duration metric: took 386.072735ms for pod "kube-controller-manager-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:27.661665   11129 pod_ready.go:83] waiting for pod "kube-proxy-j2r54" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.061557   11129 pod_ready.go:94] pod "kube-proxy-j2r54" is "Ready"
	I1210 05:30:28.061580   11129 pod_ready.go:86] duration metric: took 399.891967ms for pod "kube-proxy-j2r54" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.262414   11129 pod_ready.go:83] waiting for pod "kube-scheduler-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.661214   11129 pod_ready.go:94] pod "kube-scheduler-addons-193927" is "Ready"
	I1210 05:30:28.661240   11129 pod_ready.go:86] duration metric: took 398.800238ms for pod "kube-scheduler-addons-193927" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:30:28.661251   11129 pod_ready.go:40] duration metric: took 1.602910266s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:30:28.704459   11129 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 05:30:28.705787   11129 out.go:179] * Done! kubectl is now configured to use "addons-193927" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 05:30:26 addons-193927 crio[767]: time="2025-12-10T05:30:26.696874763Z" level=info msg="Starting container: 5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709" id=8ae3adbb-9f1b-4102-90f4-560ab1c8ff56 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 05:30:26 addons-193927 crio[767]: time="2025-12-10T05:30:26.699348322Z" level=info msg="Started container" PID=7294 containerID=5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709 description=kube-system/csi-hostpathplugin-2wcqc/csi-snapshotter id=8ae3adbb-9f1b-4102-90f4-560ab1c8ff56 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b106c581aab453f517b0e75347e8143beeea3eaa88ca3d01b32a2998b0254cc3
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.519183432Z" level=info msg="Running pod sandbox: default/busybox/POD" id=916ff8c7-0a00-4da4-aa3f-564da3f44beb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.519245247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.524483941Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:507e1b547481c7d87fa8fa8435834f2d3bf98108534ba3c32e86e84865bb4759 UID:a64bd81b-5c5c-497a-80f3-8d129505228d NetNS:/var/run/netns/e0840518-fca7-44c9-91cb-722222123488 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001386e0}] Aliases:map[]}"
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.524511737Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.534122003Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:507e1b547481c7d87fa8fa8435834f2d3bf98108534ba3c32e86e84865bb4759 UID:a64bd81b-5c5c-497a-80f3-8d129505228d NetNS:/var/run/netns/e0840518-fca7-44c9-91cb-722222123488 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001386e0}] Aliases:map[]}"
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.534241215Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.534909396Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.535667402Z" level=info msg="Ran pod sandbox 507e1b547481c7d87fa8fa8435834f2d3bf98108534ba3c32e86e84865bb4759 with infra container: default/busybox/POD" id=916ff8c7-0a00-4da4-aa3f-564da3f44beb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.536601206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ffdce500-ee96-4836-a46f-aa6757144d1d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.536708634Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ffdce500-ee96-4836-a46f-aa6757144d1d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.536741631Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ffdce500-ee96-4836-a46f-aa6757144d1d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.537281408Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2740c14a-d596-4928-8122-84bef4f51ee0 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:30:29 addons-193927 crio[767]: time="2025-12-10T05:30:29.538647724Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.120569329Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2740c14a-d596-4928-8122-84bef4f51ee0 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.121105979Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f1f696f-7ae2-4bc8-9a2c-4d952c3bcd3e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.122391798Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43f6168f-81b1-488b-b396-a661334bbb16 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.125671082Z" level=info msg="Creating container: default/busybox/busybox" id=f340c8c2-a723-428c-8a72-d1a4dde460a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.125797767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.130731522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.131168251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.165457605Z" level=info msg="Created container e4a1f95c783799157faa519c216290e271572b13334f07490720fb5bb45cead8: default/busybox/busybox" id=f340c8c2-a723-428c-8a72-d1a4dde460a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.165921685Z" level=info msg="Starting container: e4a1f95c783799157faa519c216290e271572b13334f07490720fb5bb45cead8" id=5151f67e-1237-4a20-bbc6-3875c928a316 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 05:30:30 addons-193927 crio[767]: time="2025-12-10T05:30:30.167561754Z" level=info msg="Started container" PID=7409 containerID=e4a1f95c783799157faa519c216290e271572b13334f07490720fb5bb45cead8 description=default/busybox/busybox id=5151f67e-1237-4a20-bbc6-3875c928a316 name=/runtime.v1.RuntimeService/StartContainer sandboxID=507e1b547481c7d87fa8fa8435834f2d3bf98108534ba3c32e86e84865bb4759
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	e4a1f95c78379       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   507e1b547481c       busybox                                     default
	5d4a1d5da42cd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          12 seconds ago       Running             csi-snapshotter                          0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	cd1f99729cdad       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          13 seconds ago       Running             csi-provisioner                          0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	95ca228da5e9f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            14 seconds ago       Running             liveness-probe                           0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	252c06e303732       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   c914ddd9b1e0e       gcp-auth-78565c9fb4-4wmrx                   gcp-auth
	5b9cf05c0ab5e       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	4717fea92c7df       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             17 seconds ago       Running             controller                               0                   f66ee1021a956       ingress-nginx-controller-85d4c799dd-h6x6q   ingress-nginx
	6c8ffe9e271a1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                20 seconds ago       Running             node-driver-registrar                    0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	9484d473adb72       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            21 seconds ago       Running             gadget                                   0                   d8e061ef58dcd       gadget-b2r94                                gadget
	42c477d9d74b6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              27 seconds ago       Running             registry-proxy                           0                   0c65a9b9191ac       registry-proxy-jr8xs                        kube-system
	884c3e97f2374       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   28 seconds ago       Exited              patch                                    0                   6ea00bcd8799f       gcp-auth-certs-patch-c22k6                  gcp-auth
	e4477630dcb98       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             28 seconds ago       Running             local-path-provisioner                   0                   758ac4c0239d8       local-path-provisioner-648f6765c9-nkqx4     local-path-storage
	db2f386b561a4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   29 seconds ago       Exited              create                                   0                   b9156bde00efc       gcp-auth-certs-create-fj6jz                 gcp-auth
	dc84fc8b0d7ae       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              29 seconds ago       Running             csi-resizer                              0                   7d92563ce8e11       csi-hostpath-resizer-0                      kube-system
	217c6052689f8       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      30 seconds ago       Running             volume-snapshot-controller               0                   d7c3f856bdec9       snapshot-controller-7d9fbc56b8-4ckl2        kube-system
	f02ac84563fd8       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     30 seconds ago       Running             nvidia-device-plugin-ctr                 0                   e6e6f4fe14aac       nvidia-device-plugin-daemonset-zdg7v        kube-system
	c2dd148c15de2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     33 seconds ago       Running             amd-gpu-device-plugin                    0                   2187054c978b0       amd-gpu-device-plugin-742mx                 kube-system
	bd251ebea34ff       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago       Running             volume-snapshot-controller               0                   66cbce320925a       snapshot-controller-7d9fbc56b8-v87tq        kube-system
	976a8b19e2a98       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   35 seconds ago       Running             csi-external-health-monitor-controller   0                   b106c581aab45       csi-hostpathplugin-2wcqc                    kube-system
	34f8957b9aa57       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   36 seconds ago       Exited              patch                                    0                   f41b818afd9e4       ingress-nginx-admission-patch-tc5th         ingress-nginx
	7a65eea81e573       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             36 seconds ago       Running             csi-attacher                             0                   556c98b0a1c3b       csi-hostpath-attacher-0                     kube-system
	852b508862e49       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   37 seconds ago       Exited              create                                   0                   a5445ea44de2a       ingress-nginx-admission-create-zw7mz        ingress-nginx
	d2d42e5524b3c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              38 seconds ago       Running             yakd                                     0                   cfc23ee8bbafe       yakd-dashboard-5ff678cb9-7nd7x              yakd-dashboard
	05be8bf506f18       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           40 seconds ago       Running             registry                                 0                   27434b6a6ee66       registry-6b586f9694-h4d7x                   kube-system
	c2e8fc6eb52c0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               42 seconds ago       Running             minikube-ingress-dns                     0                   da4e0a6e7190a       kube-ingress-dns-minikube                   kube-system
	395c860d8ecd8       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               47 seconds ago       Running             cloud-spanner-emulator                   0                   62ad030fae20a       cloud-spanner-emulator-5bdddb765-2jkx9      default
	cf1e8860d68b3       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        49 seconds ago       Running             metrics-server                           0                   c6d2f74cdc828       metrics-server-85b7d694d7-xswrz             kube-system
	3db45466cabf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             50 seconds ago       Running             storage-provisioner                      0                   7558a11f8badc       storage-provisioner                         kube-system
	a56c2752b1ef9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             50 seconds ago       Running             coredns                                  0                   fa7ccbb7d155c       coredns-66bc5c9577-fk5gt                    kube-system
	367aea18176f0       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11                                           About a minute ago   Running             kindnet-cni                              0                   cf2391a3d70e2       kindnet-bbr2p                               kube-system
	206f9657e0226       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             About a minute ago   Running             kube-proxy                               0                   20c7edea3ad19       kube-proxy-j2r54                            kube-system
	6501c9a3d5552       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             About a minute ago   Running             kube-scheduler                           0                   8073fc2e65bff       kube-scheduler-addons-193927                kube-system
	0a2be4003b1b3       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             About a minute ago   Running             kube-apiserver                           0                   e7539e94bcf35       kube-apiserver-addons-193927                kube-system
	3b5e4f42b79e9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   48483d0025ed7       etcd-addons-193927                          kube-system
	b2eb3db5b9910       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             About a minute ago   Running             kube-controller-manager                  0                   f728cb3f57927       kube-controller-manager-addons-193927       kube-system
	
	
	==> coredns [a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b] <==
	[INFO] 10.244.0.17:41086 - 6742 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128895s
	[INFO] 10.244.0.17:42952 - 47737 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088607s
	[INFO] 10.244.0.17:42952 - 47531 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107394s
	[INFO] 10.244.0.17:52207 - 40674 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000083013s
	[INFO] 10.244.0.17:52207 - 40316 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000103385s
	[INFO] 10.244.0.17:43132 - 607 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000061496s
	[INFO] 10.244.0.17:43132 - 890 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000115589s
	[INFO] 10.244.0.17:55935 - 230 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000056393s
	[INFO] 10.244.0.17:55935 - 65494 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000071249s
	[INFO] 10.244.0.17:33280 - 29251 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094161s
	[INFO] 10.244.0.17:33280 - 29475 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000136076s
	[INFO] 10.244.0.22:35825 - 3590 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016616s
	[INFO] 10.244.0.22:48785 - 44469 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167783s
	[INFO] 10.244.0.22:58048 - 41586 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114021s
	[INFO] 10.244.0.22:58701 - 40548 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000148722s
	[INFO] 10.244.0.22:52240 - 28334 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134584s
	[INFO] 10.244.0.22:51781 - 12457 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129822s
	[INFO] 10.244.0.22:41225 - 55343 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005268847s
	[INFO] 10.244.0.22:40262 - 1629 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00631931s
	[INFO] 10.244.0.22:39588 - 29678 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005758401s
	[INFO] 10.244.0.22:39115 - 48800 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.013321955s
	[INFO] 10.244.0.22:52988 - 1128 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006107415s
	[INFO] 10.244.0.22:56820 - 3069 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006264135s
	[INFO] 10.244.0.22:57717 - 53764 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001839231s
	[INFO] 10.244.0.22:46593 - 8361 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002307643s
	
	
	==> describe nodes <==
	Name:               addons-193927
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-193927
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=addons-193927
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_29_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-193927
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-193927"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:29:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-193927
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:30:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:30:30 +0000   Wed, 10 Dec 2025 05:29:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:30:30 +0000   Wed, 10 Dec 2025 05:29:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:30:30 +0000   Wed, 10 Dec 2025 05:29:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:30:30 +0000   Wed, 10 Dec 2025 05:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-193927
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                62b96902-3e68-44a0-bbf4-5e77aa3a7b36
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-2jkx9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  gadget                      gadget-b2r94                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  gcp-auth                    gcp-auth-78565c9fb4-4wmrx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-h6x6q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         63s
	  kube-system                 amd-gpu-device-plugin-742mx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-66bc5c9577-fk5gt                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     64s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 csi-hostpathplugin-2wcqc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 etcd-addons-193927                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-bbr2p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      65s
	  kube-system                 kube-apiserver-addons-193927                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-addons-193927        200m (2%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-j2r54                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-addons-193927                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 metrics-server-85b7d694d7-xswrz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         64s
	  kube-system                 nvidia-device-plugin-daemonset-zdg7v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 registry-6b586f9694-h4d7x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 registry-creds-764b6fb674-ghgkh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 registry-proxy-jr8xs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 snapshot-controller-7d9fbc56b8-4ckl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 snapshot-controller-7d9fbc56b8-v87tq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  local-path-storage          local-path-provisioner-648f6765c9-nkqx4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7nd7x               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 63s   kube-proxy       
	  Normal  Starting                 70s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s   kubelet          Node addons-193927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s   kubelet          Node addons-193927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s   kubelet          Node addons-193927 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           65s   node-controller  Node addons-193927 event: Registered Node addons-193927 in Controller
	  Normal  NodeReady                52s   kubelet          Node addons-193927 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000893] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.080009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.359935] i8042: Warning: Keylock active
	[  +0.011050] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.491371] block sda: the capability attribute has been deprecated.
	[  +0.085783] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023769] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.147072] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a] <==
	{"level":"warn","ts":"2025-12-10T05:29:26.543509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.550592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.557149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.563181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.569207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.590241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.593366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.605748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:26.646540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:34.153840Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.057184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/addons-193927\" limit:1 ","response":"range_response_count:1 size:709"}
	{"level":"info","ts":"2025-12-10T05:29:34.153847Z","caller":"traceutil/trace.go:172","msg":"trace[2028098459] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"108.766038ms","start":"2025-12-10T05:29:34.045062Z","end":"2025-12-10T05:29:34.153828Z","steps":["trace[2028098459] 'process raft request'  (duration: 45.321423ms)","trace[2028098459] 'compare'  (duration: 63.363517ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T05:29:34.153929Z","caller":"traceutil/trace.go:172","msg":"trace[1316088184] range","detail":"{range_begin:/registry/csinodes/addons-193927; range_end:; response_count:1; response_revision:294; }","duration":"108.157458ms","start":"2025-12-10T05:29:34.045750Z","end":"2025-12-10T05:29:34.153907Z","steps":["trace[1316088184] 'agreement among raft nodes before linearized reading'  (duration: 44.603618ms)","trace[1316088184] 'range keys from in-memory index tree'  (duration: 63.387972ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:29:34.284789Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.198951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-12-10T05:29:34.284808Z","caller":"traceutil/trace.go:172","msg":"trace[714758576] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"128.633533ms","start":"2025-12-10T05:29:34.156163Z","end":"2025-12-10T05:29:34.284797Z","steps":["trace[714758576] 'process raft request'  (duration: 124.342694ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:29:34.284862Z","caller":"traceutil/trace.go:172","msg":"trace[1994471047] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:297; }","duration":"103.275718ms","start":"2025-12-10T05:29:34.181568Z","end":"2025-12-10T05:29:34.284844Z","steps":["trace[1994471047] 'agreement among raft nodes before linearized reading'  (duration: 98.944888ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:29:37.444573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:29:37.451244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.144234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.152641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.165585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:30:04.174024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37820","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T05:30:23.668673Z","caller":"traceutil/trace.go:172","msg":"trace[1363396482] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1172; }","duration":"118.354145ms","start":"2025-12-10T05:30:23.550301Z","end":"2025-12-10T05:30:23.668655Z","steps":["trace[1363396482] 'read index received'  (duration: 118.349693ms)","trace[1363396482] 'applied index is now lower than readState.Index'  (duration: 3.851µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:30:23.668788Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.472916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:30:23.668816Z","caller":"traceutil/trace.go:172","msg":"trace[1017580327] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1148; }","duration":"118.515715ms","start":"2025-12-10T05:30:23.550290Z","end":"2025-12-10T05:30:23.668806Z","steps":["trace[1017580327] 'agreement among raft nodes before linearized reading'  (duration: 118.44338ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:30:23.668890Z","caller":"traceutil/trace.go:172","msg":"trace[478720473] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"139.362327ms","start":"2025-12-10T05:30:23.529514Z","end":"2025-12-10T05:30:23.668876Z","steps":["trace[478720473] 'process raft request'  (duration: 139.246147ms)"],"step_count":1}
	
	
	==> gcp-auth [252c06e30373275a7c74a3d73ac3e987b7218f6a70e8caf1bdb5e4bebdcd5a85] <==
	2025/12/10 05:30:23 GCP Auth Webhook started!
	2025/12/10 05:30:29 Ready to marshal response ...
	2025/12/10 05:30:29 Ready to write response ...
	2025/12/10 05:30:29 Ready to marshal response ...
	2025/12/10 05:30:29 Ready to write response ...
	2025/12/10 05:30:29 Ready to marshal response ...
	2025/12/10 05:30:29 Ready to write response ...
	
	
	==> kernel <==
	 05:30:39 up 13 min,  0 user,  load average: 2.04, 0.90, 0.34
	Linux addons-193927 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525] <==
	I1210 05:29:37.340938       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1210 05:29:37.341103       1 main.go:148] setting mtu 1500 for CNI 
	I1210 05:29:37.341121       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 05:29:37.341142       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T05:29:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 05:29:37.544776       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 05:29:37.544818       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 05:29:37.544840       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 05:29:37.545355       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 05:29:37.945135       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 05:29:37.945155       1 metrics.go:72] Registering metrics
	I1210 05:29:37.945204       1 controller.go:711] "Syncing nftables rules"
	I1210 05:29:47.546192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:29:47.546281       1 main.go:301] handling current node
	I1210 05:29:57.544836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:29:57.544890       1 main.go:301] handling current node
	I1210 05:30:07.545454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:30:07.545483       1 main.go:301] handling current node
	I1210 05:30:17.548331       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:30:17.548365       1 main.go:301] handling current node
	I1210 05:30:27.545512       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:30:27.545558       1 main.go:301] handling current node
	I1210 05:30:37.545232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:30:37.545275       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3] <==
	 > logger="UnhandledError"
	E1210 05:30:01.571574       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.193.23:443: connect: connection refused" logger="UnhandledError"
	E1210 05:30:01.573036       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.193.23:443: connect: connection refused" logger="UnhandledError"
	W1210 05:30:02.572568       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 05:30:02.572575       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:30:02.572616       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 05:30:02.572632       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1210 05:30:02.572661       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 05:30:02.573770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 05:30:04.144155       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 05:30:04.152567       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 05:30:04.165567       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 05:30:04.174003       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1210 05:30:06.588229       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.193.23:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W1210 05:30:06.588859       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:30:06.590687       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 05:30:06.608260       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:30:37.343753       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50878: use of closed network connection
	E1210 05:30:37.484150       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50908: use of closed network connection
	
	
	==> kube-controller-manager [b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26] <==
	I1210 05:29:34.133035       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 05:29:34.133068       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-193927"
	I1210 05:29:34.133122       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 05:29:34.133161       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 05:29:34.133552       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 05:29:34.134048       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 05:29:34.134097       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 05:29:34.134181       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 05:29:34.134199       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 05:29:34.136854       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 05:29:34.136867       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 05:29:34.136925       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 05:29:34.136985       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 05:29:34.136994       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 05:29:34.136999       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 05:29:34.138061       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:29:34.218457       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-193927" podCIDRs=["10.244.0.0/24"]
	I1210 05:29:49.135029       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1210 05:30:04.135216       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 05:30:04.135997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 05:30:04.136091       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 05:30:04.146454       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 05:30:04.150824       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 05:30:04.237425       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:30:04.251603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c] <==
	I1210 05:29:35.464303       1 server_linux.go:53] "Using iptables proxy"
	I1210 05:29:35.667727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:29:35.768882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:29:35.768936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 05:29:35.769058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:29:35.955483       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 05:29:35.955542       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:29:35.998075       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:29:36.009184       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:29:36.012244       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:36.035954       1 config.go:200] "Starting service config controller"
	I1210 05:29:36.057859       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:29:36.041489       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:29:36.057931       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:29:36.041498       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:29:36.057944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:29:36.042015       1 config.go:309] "Starting node config controller"
	I1210 05:29:36.057968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:29:36.057974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:29:36.158635       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:29:36.158664       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:29:36.158678       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6] <==
	E1210 05:29:27.035860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:29:27.035882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:29:27.035918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:29:27.035932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:29:27.035977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:29:27.035984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:29:27.036102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:29:27.036109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:29:27.036132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:29:27.867863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:29:27.977152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:29:27.981940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:29:27.991879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:29:28.050328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:29:28.144770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:29:28.168665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:29:28.171481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:29:28.196414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:29:28.209276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:29:28.215030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:29:28.225985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:29:28.359777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:29:28.363663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:29:28.383623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1210 05:29:30.433634       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:30:09 addons-193927 kubelet[2362]: I1210 05:30:09.627861    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-zdg7v" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:30:09 addons-193927 kubelet[2362]: I1210 05:30:09.636292    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=12.900034831 podStartE2EDuration="33.636277005s" podCreationTimestamp="2025-12-10 05:29:36 +0000 UTC" firstStartedPulling="2025-12-10 05:29:48.219570376 +0000 UTC m=+18.829287627" lastFinishedPulling="2025-12-10 05:30:08.955812541 +0000 UTC m=+39.565529801" observedRunningTime="2025-12-10 05:30:09.635644142 +0000 UTC m=+40.245361416" watchObservedRunningTime="2025-12-10 05:30:09.636277005 +0000 UTC m=+40.245994271"
	Dec 10 05:30:10 addons-193927 kubelet[2362]: I1210 05:30:10.643213    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="local-path-storage/local-path-provisioner-648f6765c9-nkqx4" podStartSLOduration=12.690876842 podStartE2EDuration="34.643192629s" podCreationTimestamp="2025-12-10 05:29:36 +0000 UTC" firstStartedPulling="2025-12-10 05:29:48.223548231 +0000 UTC m=+18.833265482" lastFinishedPulling="2025-12-10 05:30:10.17586402 +0000 UTC m=+40.785581269" observedRunningTime="2025-12-10 05:30:10.642803018 +0000 UTC m=+41.252520285" watchObservedRunningTime="2025-12-10 05:30:10.643192629 +0000 UTC m=+41.252909896"
	Dec 10 05:30:10 addons-193927 kubelet[2362]: I1210 05:30:10.793594    2362 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nww6g\" (UniqueName: \"kubernetes.io/projected/3f6cb51a-b57f-41be-85b5-486831436e05-kube-api-access-nww6g\") pod \"3f6cb51a-b57f-41be-85b5-486831436e05\" (UID: \"3f6cb51a-b57f-41be-85b5-486831436e05\") "
	Dec 10 05:30:10 addons-193927 kubelet[2362]: I1210 05:30:10.795712    2362 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f6cb51a-b57f-41be-85b5-486831436e05-kube-api-access-nww6g" (OuterVolumeSpecName: "kube-api-access-nww6g") pod "3f6cb51a-b57f-41be-85b5-486831436e05" (UID: "3f6cb51a-b57f-41be-85b5-486831436e05"). InnerVolumeSpecName "kube-api-access-nww6g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 05:30:10 addons-193927 kubelet[2362]: I1210 05:30:10.894177    2362 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nww6g\" (UniqueName: \"kubernetes.io/projected/3f6cb51a-b57f-41be-85b5-486831436e05-kube-api-access-nww6g\") on node \"addons-193927\" DevicePath \"\""
	Dec 10 05:30:11 addons-193927 kubelet[2362]: I1210 05:30:11.638491    2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9156bde00efc8501252a4e8b569ff73e33d5e2b49dd4f1e026e3906c61eed44"
	Dec 10 05:30:11 addons-193927 kubelet[2362]: I1210 05:30:11.902976    2362 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj6jd\" (UniqueName: \"kubernetes.io/projected/80e4134e-b7de-4f53-87c5-dcb582aaf035-kube-api-access-vj6jd\") pod \"80e4134e-b7de-4f53-87c5-dcb582aaf035\" (UID: \"80e4134e-b7de-4f53-87c5-dcb582aaf035\") "
	Dec 10 05:30:11 addons-193927 kubelet[2362]: I1210 05:30:11.904918    2362 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e4134e-b7de-4f53-87c5-dcb582aaf035-kube-api-access-vj6jd" (OuterVolumeSpecName: "kube-api-access-vj6jd") pod "80e4134e-b7de-4f53-87c5-dcb582aaf035" (UID: "80e4134e-b7de-4f53-87c5-dcb582aaf035"). InnerVolumeSpecName "kube-api-access-vj6jd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 05:30:12 addons-193927 kubelet[2362]: I1210 05:30:12.003735    2362 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vj6jd\" (UniqueName: \"kubernetes.io/projected/80e4134e-b7de-4f53-87c5-dcb582aaf035-kube-api-access-vj6jd\") on node \"addons-193927\" DevicePath \"\""
	Dec 10 05:30:12 addons-193927 kubelet[2362]: I1210 05:30:12.643102    2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ea00bcd8799fc10bdaf14ff6246d555d1e298369afbdec23364bd5808ea50f6"
	Dec 10 05:30:12 addons-193927 kubelet[2362]: I1210 05:30:12.645022    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jr8xs" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:30:12 addons-193927 kubelet[2362]: I1210 05:30:12.655675    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-jr8xs" podStartSLOduration=2.152832942 podStartE2EDuration="25.655651771s" podCreationTimestamp="2025-12-10 05:29:47 +0000 UTC" firstStartedPulling="2025-12-10 05:29:48.309239617 +0000 UTC m=+18.918956863" lastFinishedPulling="2025-12-10 05:30:11.812058438 +0000 UTC m=+42.421775692" observedRunningTime="2025-12-10 05:30:12.655598815 +0000 UTC m=+43.265316082" watchObservedRunningTime="2025-12-10 05:30:12.655651771 +0000 UTC m=+43.265369038"
	Dec 10 05:30:13 addons-193927 kubelet[2362]: I1210 05:30:13.647276    2362 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jr8xs" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:30:19 addons-193927 kubelet[2362]: E1210 05:30:19.657816    2362 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 10 05:30:19 addons-193927 kubelet[2362]: E1210 05:30:19.657911    2362 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9369e244-75eb-4b63-883e-0cb1e1d332eb-gcr-creds podName:9369e244-75eb-4b63-883e-0cb1e1d332eb nodeName:}" failed. No retries permitted until 2025-12-10 05:30:51.657890712 +0000 UTC m=+82.267607959 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/9369e244-75eb-4b63-883e-0cb1e1d332eb-gcr-creds") pod "registry-creds-764b6fb674-ghgkh" (UID: "9369e244-75eb-4b63-883e-0cb1e1d332eb") : secret "registry-creds-gcr" not found
	Dec 10 05:30:21 addons-193927 kubelet[2362]: I1210 05:30:21.689515    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-b2r94" podStartSLOduration=20.371243825 podStartE2EDuration="45.689495774s" podCreationTimestamp="2025-12-10 05:29:36 +0000 UTC" firstStartedPulling="2025-12-10 05:29:52.086537748 +0000 UTC m=+22.696254994" lastFinishedPulling="2025-12-10 05:30:17.404789698 +0000 UTC m=+48.014506943" observedRunningTime="2025-12-10 05:30:17.678705448 +0000 UTC m=+48.288422715" watchObservedRunningTime="2025-12-10 05:30:21.689495774 +0000 UTC m=+52.299213044"
	Dec 10 05:30:21 addons-193927 kubelet[2362]: I1210 05:30:21.690687    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-h6x6q" podStartSLOduration=27.952833188 podStartE2EDuration="45.690672913s" podCreationTimestamp="2025-12-10 05:29:36 +0000 UTC" firstStartedPulling="2025-12-10 05:30:03.752836874 +0000 UTC m=+34.362554133" lastFinishedPulling="2025-12-10 05:30:21.490676613 +0000 UTC m=+52.100393858" observedRunningTime="2025-12-10 05:30:21.688803276 +0000 UTC m=+52.298520557" watchObservedRunningTime="2025-12-10 05:30:21.690672913 +0000 UTC m=+52.300390178"
	Dec 10 05:30:23 addons-193927 kubelet[2362]: I1210 05:30:23.521876    2362 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 10 05:30:23 addons-193927 kubelet[2362]: I1210 05:30:23.521923    2362 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 10 05:30:24 addons-193927 kubelet[2362]: I1210 05:30:24.707306    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-4wmrx" podStartSLOduration=40.355960878 podStartE2EDuration="42.707275182s" podCreationTimestamp="2025-12-10 05:29:42 +0000 UTC" firstStartedPulling="2025-12-10 05:30:21.416370288 +0000 UTC m=+52.026087539" lastFinishedPulling="2025-12-10 05:30:23.767684595 +0000 UTC m=+54.377401843" observedRunningTime="2025-12-10 05:30:24.706371461 +0000 UTC m=+55.316088747" watchObservedRunningTime="2025-12-10 05:30:24.707275182 +0000 UTC m=+55.316992449"
	Dec 10 05:30:26 addons-193927 kubelet[2362]: I1210 05:30:26.728487    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-2wcqc" podStartSLOduration=1.275638557 podStartE2EDuration="39.728467572s" podCreationTimestamp="2025-12-10 05:29:47 +0000 UTC" firstStartedPulling="2025-12-10 05:29:48.205476685 +0000 UTC m=+18.815193936" lastFinishedPulling="2025-12-10 05:30:26.658305689 +0000 UTC m=+57.268022951" observedRunningTime="2025-12-10 05:30:26.727667228 +0000 UTC m=+57.337384522" watchObservedRunningTime="2025-12-10 05:30:26.728467572 +0000 UTC m=+57.338184839"
	Dec 10 05:30:29 addons-193927 kubelet[2362]: I1210 05:30:29.235312    2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tc4h\" (UniqueName: \"kubernetes.io/projected/a64bd81b-5c5c-497a-80f3-8d129505228d-kube-api-access-5tc4h\") pod \"busybox\" (UID: \"a64bd81b-5c5c-497a-80f3-8d129505228d\") " pod="default/busybox"
	Dec 10 05:30:29 addons-193927 kubelet[2362]: I1210 05:30:29.235380    2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a64bd81b-5c5c-497a-80f3-8d129505228d-gcp-creds\") pod \"busybox\" (UID: \"a64bd81b-5c5c-497a-80f3-8d129505228d\") " pod="default/busybox"
	Dec 10 05:30:30 addons-193927 kubelet[2362]: I1210 05:30:30.747700    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.162916905 podStartE2EDuration="1.74768325s" podCreationTimestamp="2025-12-10 05:30:29 +0000 UTC" firstStartedPulling="2025-12-10 05:30:29.53697034 +0000 UTC m=+60.146687586" lastFinishedPulling="2025-12-10 05:30:30.121736668 +0000 UTC m=+60.731453931" observedRunningTime="2025-12-10 05:30:30.746516716 +0000 UTC m=+61.356233993" watchObservedRunningTime="2025-12-10 05:30:30.74768325 +0000 UTC m=+61.357400516"
	
	
	==> storage-provisioner [3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163] <==
	W1210 05:30:14.370591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:16.372818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:16.376362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:18.380399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:18.386814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:20.389699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:20.394692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:22.397531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:22.400618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:24.402477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:24.405705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:26.409023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:26.412980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:28.415559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:28.418732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:30.420926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:30.423872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:32.426322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:32.431191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:34.433472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:34.436859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:36.439484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:36.443839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:38.447217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:30:38.451718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-193927 -n addons-193927
helpers_test.go:270: (dbg) Run:  kubectl --context addons-193927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-create-fj6jz gcp-auth-certs-patch-c22k6 ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th registry-creds-764b6fb674-ghgkh
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-193927 describe pod gcp-auth-certs-create-fj6jz gcp-auth-certs-patch-c22k6 ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th registry-creds-764b6fb674-ghgkh
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-193927 describe pod gcp-auth-certs-create-fj6jz gcp-auth-certs-patch-c22k6 ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th registry-creds-764b6fb674-ghgkh: exit status 1 (56.79071ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-fj6jz" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-c22k6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-zw7mz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tc5th" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ghgkh" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-193927 describe pod gcp-auth-certs-create-fj6jz gcp-auth-certs-patch-c22k6 ingress-nginx-admission-create-zw7mz ingress-nginx-admission-patch-tc5th registry-creds-764b6fb674-ghgkh: exit status 1
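The NotFound errors above most plausibly mean the non-running pods listed at helpers_test.go:281 (the completed cert-create/patch Jobs plus the pending registry-creds pod) were deleted, or cleaned up by their owning Jobs, in the window between the list and the describe, so the post-mortem describe has nothing left to show. A hedged sketch of capturing that state in a single call instead (the context name addons-193927 is taken from this run):

	kubectl --context addons-193927 get po -A --field-selector=status.phase!=Running -o yaml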
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable headlamp --alsologtostderr -v=1: exit status 11 (230.82094ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:30:39.970552   21202 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:39.970828   21202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:39.970837   21202 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:39.970841   21202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:39.970989   21202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:39.971222   21202 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:39.971511   21202 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:39.971534   21202 addons.go:622] checking whether the cluster is paused
	I1210 05:30:39.971625   21202 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:39.971637   21202 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:39.971975   21202 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:39.989006   21202 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:39.989044   21202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:40.005187   21202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:40.098153   21202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:40.098233   21202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:40.125653   21202 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:40.125674   21202 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:40.125680   21202 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:40.125684   21202 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:40.125688   21202 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:40.125692   21202 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:40.125697   21202 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:40.125701   21202 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:40.125706   21202 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:40.125715   21202 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:40.125724   21202 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:40.125730   21202 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:40.125737   21202 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:40.125743   21202 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:40.125751   21202 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:40.125763   21202 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:40.125772   21202 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:40.125778   21202 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:40.125781   21202 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:40.125785   21202 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:40.125791   21202 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:40.125799   21202 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:40.125804   21202 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:40.125812   21202 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:40.125817   21202 cri.go:89] found id: ""
	I1210 05:30:40.125863   21202 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:40.138839   21202 out.go:203] 
	W1210 05:30:40.139942   21202 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:40.139962   21202 out.go:285] * 
	* 
	W1210 05:30:40.143195   21202 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:40.144386   21202 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.43s)
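The Headlamp failure above, like every other addons-disable failure in this run, exits with MK_ADDON_DISABLE_PAUSED: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running sudo runc list -f json inside the node, and it is the runc call itself that fails because /run/runc does not exist on this crio node (plausibly because the runtime in use keeps its state elsewhere, so runc's default state directory is never created). A minimal sketch for reproducing the check by hand, reusing the exact commands that appear in the captured stderr; the profile name addons-193927 comes from this run, and the final ls is only an illustrative probe of likely runtime state directories:

	out/minikube-linux-amd64 -p addons-193927 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p addons-193927 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p addons-193927 ssh -- ls -ld /run/runc /run/crun /run/crio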

x
+
TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-2jkx9" [80dea52b-ac53-4e4a-9bb4-39a4e7e0ead1] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002123157s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (234.522451ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:30:48.040002   21723 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:48.040146   21723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:48.040155   21723 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:48.040159   21723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:48.040335   21723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:48.040571   21723 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:48.040867   21723 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:48.040885   21723 addons.go:622] checking whether the cluster is paused
	I1210 05:30:48.040962   21723 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:48.040973   21723 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:48.041318   21723 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:48.057997   21723 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:48.058048   21723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:48.074549   21723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:48.168623   21723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:48.168701   21723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:48.199232   21723 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:48.199265   21723 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:48.199269   21723 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:48.199272   21723 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:48.199275   21723 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:48.199283   21723 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:48.199286   21723 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:48.199289   21723 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:48.199292   21723 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:48.199301   21723 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:48.199305   21723 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:48.199308   21723 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:48.199310   21723 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:48.199314   21723 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:48.199316   21723 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:48.199323   21723 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:48.199328   21723 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:48.199332   21723 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:48.199335   21723 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:48.199337   21723 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:48.199343   21723 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:48.199346   21723 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:48.199348   21723 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:48.199351   21723 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:48.199354   21723 cri.go:89] found id: ""
	I1210 05:30:48.199400   21723 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:48.212906   21723 out.go:203] 
	W1210 05:30:48.214070   21723 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:48.214109   21723 out.go:285] * 
	* 
	W1210 05:30:48.217007   21723 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:48.218202   21723 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)
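All of the MK_ADDON_DISABLE_PAUSED failures in this run share the root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; on this crio node the runc state directory /run/runc does not exist, so that command exits with status 1 and the disable aborts with exit status 11. Below is a minimal Go sketch (hypothetical, not minikube's code) of just that paused-check step, showing how the missing directory surfaces as a command failure rather than as an empty container list; it assumes runc and sudo are available on the host where it runs.

// pausedcheck.go - hedged sketch of the failing step `sudo runc list -f json`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the exact command from the logs above.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On the node under test this path is taken, with stderr:
		//   level=error msg="open /run/runc: no such file or directory"
		// and minikube maps the non-zero exit to MK_ADDON_DISABLE_PAUSED.
		fmt.Printf("paused-check failed: %v\n%s", err, out)
		return
	}
	// A successful run with no containers would print an empty/null JSON list,
	// which would be treated as "nothing is paused".
	fmt.Printf("runc containers: %s\n", out)
}

The same check, and the same /run/runc error, is what fails the disable step in the other addon tests in this report (Registry, Ingress, MetricsServer, CSI, Headlamp, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin).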

                                                
                                    
x
+
TestAddons/parallel/LocalPath (11.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-193927 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-193927 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-193927 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [6cce10a9-5a78-4758-92f8-259f6f73c447] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [6cce10a9-5a78-4758-92f8-259f6f73c447] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [6cce10a9-5a78-4758-92f8-259f6f73c447] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002470667s
addons_test.go:969: (dbg) Run:  kubectl --context addons-193927 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 ssh "cat /opt/local-path-provisioner/pvc-474dcd7d-97ea-4af6-9477-35c3227c923f_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-193927 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-193927 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (236.874878ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:30:56.375729   23228 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:56.376027   23228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:56.376040   23228 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:56.376046   23228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:56.376262   23228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:56.376525   23228 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:56.376841   23228 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:56.376864   23228 addons.go:622] checking whether the cluster is paused
	I1210 05:30:56.376954   23228 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:56.376970   23228 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:56.377381   23228 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:56.394413   23228 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:56.394472   23228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:56.410673   23228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:56.504276   23228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:56.504397   23228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:56.532312   23228 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:56.532337   23228 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:56.532347   23228 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:56.532351   23228 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:56.532354   23228 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:56.532357   23228 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:56.532360   23228 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:56.532363   23228 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:56.532366   23228 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:56.532371   23228 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:56.532375   23228 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:56.532377   23228 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:56.532380   23228 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:56.532383   23228 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:56.532392   23228 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:56.532407   23228 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:56.532415   23228 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:56.532418   23228 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:56.532421   23228 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:56.532424   23228 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:56.532433   23228 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:56.532436   23228 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:56.532439   23228 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:56.532441   23228 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:56.532444   23228 cri.go:89] found id: ""
	I1210 05:30:56.532480   23228 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:56.545912   23228 out.go:203] 
	W1210 05:30:56.547247   23228 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:56.547276   23228 out.go:285] * 
	* 
	W1210 05:30:56.550941   23228 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:56.552662   23228 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (11.11s)
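The LocalPath flow itself succeeds above: the repeated `kubectl get pvc test-pvc -o jsonpath={.status.phase}` lines are the test helper polling the claim's status.phase, the test-local-path pod completes, and the provisioned file is read back over SSH; only the final storage-provisioner-rancher disable step hits the same /run/runc error. As an illustration of that polling pattern, here is a small hedged Go sketch (not the helper's actual implementation) that re-runs the same kubectl command until the claim reports Bound or a deadline passes; the context, namespace, and claim name are taken from the log, while the Bound target and the 2-second retry interval are assumptions.

// pvcwait.go - hedged sketch of the status-phase poll loop seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // the test waits up to 5m0s
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-193927",
			"get", "pvc", "test-pvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		time.Sleep(2 * time.Second) // assumed interval between retries
	}
	fmt.Println("timed out waiting for test-pvc to become Bound")
}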

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-zdg7v" [c81ec6a5-eddd-4f24-b3f6-22fedc2f79b1] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003323365s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (247.344615ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:30:42.789576   21308 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:42.789728   21308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:42.789741   21308 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:42.789748   21308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:42.789952   21308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:42.790245   21308 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:42.790587   21308 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:42.790609   21308 addons.go:622] checking whether the cluster is paused
	I1210 05:30:42.790740   21308 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:42.790755   21308 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:42.791149   21308 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:42.808798   21308 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:42.808846   21308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:42.826964   21308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:42.920213   21308 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:42.920281   21308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:42.950071   21308 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:42.950115   21308 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:42.950121   21308 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:42.950124   21308 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:42.950134   21308 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:42.950137   21308 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:42.950140   21308 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:42.950143   21308 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:42.950146   21308 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:42.950152   21308 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:42.950158   21308 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:42.950160   21308 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:42.950164   21308 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:42.950166   21308 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:42.950169   21308 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:42.950174   21308 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:42.950177   21308 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:42.950181   21308 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:42.950183   21308 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:42.950185   21308 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:42.950190   21308 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:42.950193   21308 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:42.950196   21308 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:42.950199   21308 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:42.950201   21308 cri.go:89] found id: ""
	I1210 05:30:42.950237   21308 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:42.964637   21308 out.go:203] 
	W1210 05:30:42.966144   21308 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:42.966162   21308 out.go:285] * 
	* 
	W1210 05:30:42.969008   21308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:42.970146   21308 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-7nd7x" [04353e3c-1b39-4df1-81cd-fb239b632a8d] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003256028s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable yakd --alsologtostderr -v=1: exit status 11 (249.710453ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:30:53.287681   22673 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:53.287797   22673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:53.287806   22673 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:53.287810   22673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:53.287984   22673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:53.288256   22673 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:53.288649   22673 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:53.288674   22673 addons.go:622] checking whether the cluster is paused
	I1210 05:30:53.288781   22673 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:53.288796   22673 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:53.289281   22673 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:53.308907   22673 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:53.308974   22673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:53.329285   22673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:53.426561   22673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:53.426644   22673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:53.454325   22673 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:53.454349   22673 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:53.454355   22673 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:53.454360   22673 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:53.454364   22673 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:53.454369   22673 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:53.454373   22673 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:53.454377   22673 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:53.454381   22673 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:53.454394   22673 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:53.454402   22673 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:53.454407   22673 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:53.454413   22673 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:53.454420   22673 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:53.454428   22673 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:53.454452   22673 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:53.454462   22673 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:53.454468   22673 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:53.454472   22673 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:53.454476   22673 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:53.454484   22673 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:53.454489   22673 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:53.454496   22673 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:53.454501   22673 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:53.454507   22673 cri.go:89] found id: ""
	I1210 05:30:53.454548   22673 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:53.467529   22673 out.go:203] 
	W1210 05:30:53.468588   22673 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:53.468608   22673 out.go:285] * 
	* 
	W1210 05:30:53.471965   22673 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:53.473028   22673 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-742mx" [8a174135-0be6-4c4b-900b-8903ba2adc24] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003438496s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-193927 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-193927 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (254.099513ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:30:42.789637   21307 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:30:42.789897   21307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:42.789907   21307 out.go:374] Setting ErrFile to fd 2...
	I1210 05:30:42.789911   21307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:30:42.790135   21307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:30:42.790379   21307 mustload.go:66] Loading cluster: addons-193927
	I1210 05:30:42.790655   21307 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:42.790671   21307 addons.go:622] checking whether the cluster is paused
	I1210 05:30:42.790745   21307 config.go:182] Loaded profile config "addons-193927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:30:42.790752   21307 host.go:66] Checking if "addons-193927" exists ...
	I1210 05:30:42.791144   21307 cli_runner.go:164] Run: docker container inspect addons-193927 --format={{.State.Status}}
	I1210 05:30:42.808403   21307 ssh_runner.go:195] Run: systemctl --version
	I1210 05:30:42.808481   21307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193927
	I1210 05:30:42.826593   21307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/addons-193927/id_rsa Username:docker}
	I1210 05:30:42.920534   21307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:30:42.920612   21307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:30:42.951491   21307 cri.go:89] found id: "5d4a1d5da42cdea143fe7688a27cc37ad2f4a146e885ca2f25810e17c009c709"
	I1210 05:30:42.951506   21307 cri.go:89] found id: "cd1f99729cdad01237d94da575e9488a1f060060c59b7858ae362146b66a5f07"
	I1210 05:30:42.951510   21307 cri.go:89] found id: "95ca228da5e9f4d7e909834091b594c45c7208f8d3b2a571abd619c956f77482"
	I1210 05:30:42.951513   21307 cri.go:89] found id: "5b9cf05c0ab5e38bbe9cfe2273f65fd13711e93896f218f41f79c0660b03dc90"
	I1210 05:30:42.951522   21307 cri.go:89] found id: "6c8ffe9e271a1d0db9a74167b5f966efa0dfe72c5e1662e507abbd8e9663fab6"
	I1210 05:30:42.951525   21307 cri.go:89] found id: "42c477d9d74b6c98a8cb5af1e1f7e3db2b09e988a7fae2733bc43265b154797e"
	I1210 05:30:42.951528   21307 cri.go:89] found id: "dc84fc8b0d7ae154e64ef5052f253bba4217a7a5e867c4712f16ca97cf539e99"
	I1210 05:30:42.951530   21307 cri.go:89] found id: "217c6052689f8c587e315acc25a1b2849ce25e9b39451148233d1f6aa28f814e"
	I1210 05:30:42.951533   21307 cri.go:89] found id: "f02ac84563fd8d04d4258d15032b4be710d23f174fe6977d0c77a2b2231ceb66"
	I1210 05:30:42.951538   21307 cri.go:89] found id: "c2dd148c15de2f4ce8a5067f1648f58cbe34599d18b462157fbe53d635a2ae2d"
	I1210 05:30:42.951541   21307 cri.go:89] found id: "bd251ebea34ff80ac352d3659aca4e9dd92516b5b29e42918a88320e6d6c00a0"
	I1210 05:30:42.951544   21307 cri.go:89] found id: "976a8b19e2a981db8eb4cccab7c5e66c6de34da6ca5d67769e3041ff93464bb0"
	I1210 05:30:42.951547   21307 cri.go:89] found id: "7a65eea81e573477a1e4b111a57afc5d01badf2c22b3244ab34f401df736478b"
	I1210 05:30:42.951550   21307 cri.go:89] found id: "05be8bf506f18516a5e7ba92ec9ee9f1ddb3e678cbc2fbd6fa67ed3d79c01d6f"
	I1210 05:30:42.951553   21307 cri.go:89] found id: "c2e8fc6eb52c03a13e3410eba38a1f93510543ca9cc1f2dce8cf44f724ebb51e"
	I1210 05:30:42.951560   21307 cri.go:89] found id: "cf1e8860d68b3fed3b954f03825d2e52dc0a76a1d91f34d013990bee525f9ba1"
	I1210 05:30:42.951566   21307 cri.go:89] found id: "3db45466cabf41054b120f3c6070f1ec70a8b2841948afaab355e73b36c7f163"
	I1210 05:30:42.951572   21307 cri.go:89] found id: "a56c2752b1ef94dc626cb6a5ebe9da70da07ed988ba80bc8dbc476de7200232b"
	I1210 05:30:42.951575   21307 cri.go:89] found id: "367aea18176f031be6232fd30a314c767c7759fec05c5e3ffdaf569336ad6525"
	I1210 05:30:42.951578   21307 cri.go:89] found id: "206f9657e022657209d8593c82dd3d5694e511c41253aa91adaa9064170bed8c"
	I1210 05:30:42.951581   21307 cri.go:89] found id: "6501c9a3d5552292acb572b481eea754ae6f17f2913f63dc303d6291da022ed6"
	I1210 05:30:42.951583   21307 cri.go:89] found id: "0a2be4003b1b30e3df7421633de714b1825d05f4ed06a10d8a16f03f12641dd3"
	I1210 05:30:42.951586   21307 cri.go:89] found id: "3b5e4f42b79e944fc79e354eea5dfaeef38e9a172b426d0cd69186d52604413a"
	I1210 05:30:42.951589   21307 cri.go:89] found id: "b2eb3db5b9910016ae4a73bcd8196a9aed9e2b0ea078772712f9b76865532a26"
	I1210 05:30:42.951593   21307 cri.go:89] found id: ""
	I1210 05:30:42.951628   21307 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:30:42.965405   21307 out.go:203] 
	W1210 05:30:42.966802   21307 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:30:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:30:42.966825   21307 out.go:285] * 
	* 
	W1210 05:30:42.969730   21307 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:30:42.970855   21307 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-193927 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-604071 image ls --format json --alsologtostderr: (2.262427476s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604071 image ls --format json --alsologtostderr:
[]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604071 image ls --format json --alsologtostderr:
I1210 05:37:05.918443   53287 out.go:360] Setting OutFile to fd 1 ...
I1210 05:37:05.918709   53287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:05.918719   53287 out.go:374] Setting ErrFile to fd 2...
I1210 05:37:05.918726   53287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:05.918983   53287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:37:05.919712   53287 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:05.919846   53287 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:05.920459   53287 cli_runner.go:164] Run: docker container inspect functional-604071 --format={{.State.Status}}
I1210 05:37:05.941628   53287 ssh_runner.go:195] Run: systemctl --version
I1210 05:37:05.941682   53287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-604071
I1210 05:37:05.967244   53287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-604071/id_rsa Username:docker}
I1210 05:37:06.074050   53287 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 05:37:08.102300   53287 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.028198861s)
W1210 05:37:08.102406   53287 cache_images.go:736] Failed to list images for profile functional-604071 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1210 05:37:08.099958    8620 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-10T05:37:08Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (2.26s)
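This failure is different from the addon-disable ones: on the node, `sudo crictl images --output json` hit a gRPC DeadlineExceeded (the RST_STREAM CANCEL in the stderr), so `image ls --format json` printed an empty list and the check for registry.k8s.io/pause could not succeed. For illustration, a hedged Go sketch of the consumer side follows: it decodes a JSON image list and looks for the expected repository; the struct fields are assumptions made for this example, not minikube's documented output schema, and only the empty `[]` result and the expected image name come from the log.

// imagecheck.go - hedged sketch: scan a JSON image listing for an expected repo.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// image uses assumed field names purely for this illustration.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	raw := []byte(`[]`) // what the failing run actually printed

	var images []image
	if err := json.Unmarshal(raw, &images); err != nil {
		fmt.Println("bad json:", err)
		return
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, "registry.k8s.io/pause") {
				fmt.Println("found pause image:", tag)
				return
			}
		}
	}
	// With an empty listing this branch is taken, matching the failure above.
	fmt.Println("registry.k8s.io/pause not listed")
}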

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (2.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-589967 image ls: (2.309898652s)
functional_test.go:461: expected "kicbase/echo-server:functional-589967" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (2.85s)
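Here `image load` from the saved tarball returns without a reported error, but the follow-up `image ls` (which itself takes over two seconds) does not show the kicbase/echo-server:functional-589967 tag, which is the condition the test fails on. A hedged Go sketch of that two-step check follows; it simply replays the two CLI invocations from the log and greps the listing for the tag, and is not the test's actual implementation.

// loadcheck.go - hedged sketch: replay `image load` + `image ls`, then grep for the tag.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "functional-589967"
	tar := "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"

	if out, err := exec.Command(minikube, "-p", profile, "image", "load", tar).CombinedOutput(); err != nil {
		fmt.Printf("image load failed: %v\n%s", err, out)
		return
	}
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Printf("image ls failed: %v\n", err)
		return
	}
	if strings.Contains(string(out), "kicbase/echo-server:functional-589967") {
		fmt.Println("image is loaded")
	} else {
		// This is the branch that corresponds to the failure above.
		fmt.Println("expected image not listed")
	}
}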

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-018272 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-018272 --output=json --user=testUser: exit status 80 (1.65087275s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9d43d47d-79d1-4ee8-b2b1-954c349c534f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-018272 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5105d510-ab41-48b7-bbe2-8763f72f0104","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T05:50:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"9a8f66a7-bb79-4665-9240-096e3f08411b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-018272 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.65s)
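With --output=json, minikube emits one CloudEvents-style JSON object per line, and the test drives `pause` through that interface; the second event above carries name GUEST_PAUSE and exitcode 80, again caused by `sudo runc list -f json` failing on the missing /run/runc directory. The following hedged Go sketch decodes one such event line; the struct covers only the fields visible in the output above and is not minikube's full event schema.

// eventdecode.go - hedged sketch: decode a single minikube --output=json event line.
package main

import (
	"encoding/json"
	"fmt"
)

// event models only the fields that appear in the report above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"example","source":"https://minikube.sigs.k8s.io/",` +
		`"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",` +
		`"data":{"exitcode":"80","name":"GUEST_PAUSE","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("bad event:", err)
		return
	}
	if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["exitcode"] == "80" {
		// Matches the failure reported above.
		fmt.Printf("pause failed (%s): %s\n", ev.Data["name"], ev.Data["message"])
	}
}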

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.82s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-018272 --output=json --user=testUser
E1210 05:50:29.263136    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-018272 --output=json --user=testUser: exit status 80 (1.82238794s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3b810923-d12b-4129-9801-b365e044d108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-018272 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d8210af8-2d44-4990-b271-70052bc16296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T05:50:30Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"915b812c-5806-4e79-a13a-b83d5b06c43b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-018272 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.82s)

                                                
                                    
x
+
TestPause/serial/Pause (6.41s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-257171 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-257171 --alsologtostderr -v=5: exit status 80 (1.873960823s)

                                                
                                                
-- stdout --
	* Pausing node pause-257171 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:10:45.862542  306174 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:10:45.862799  306174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:10:45.862809  306174 out.go:374] Setting ErrFile to fd 2...
	I1210 06:10:45.862816  306174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:10:45.863040  306174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:10:45.863342  306174 out.go:368] Setting JSON to false
	I1210 06:10:45.863363  306174 mustload.go:66] Loading cluster: pause-257171
	I1210 06:10:45.863757  306174 config.go:182] Loaded profile config "pause-257171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:45.864177  306174 cli_runner.go:164] Run: docker container inspect pause-257171 --format={{.State.Status}}
	I1210 06:10:45.884020  306174 host.go:66] Checking if "pause-257171" exists ...
	I1210 06:10:45.884354  306174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:10:45.947232  306174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:false NGoroutines:96 SystemTime:2025-12-10 06:10:45.936369135 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:10:45.948147  306174 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-257171 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:10:45.953207  306174 out.go:179] * Pausing node pause-257171 ... 
	I1210 06:10:45.954349  306174 host.go:66] Checking if "pause-257171" exists ...
	I1210 06:10:45.954708  306174 ssh_runner.go:195] Run: systemctl --version
	I1210 06:10:45.954782  306174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-257171
	I1210 06:10:45.975101  306174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/pause-257171/id_rsa Username:docker}
	I1210 06:10:46.072219  306174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:46.086057  306174 pause.go:52] kubelet running: true
	I1210 06:10:46.086140  306174 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:10:46.233545  306174 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:10:46.233634  306174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:10:46.302578  306174 cri.go:89] found id: "66ade9741737d0618d31b9e331c6d038cdc1e2bb1fd529e9541980bf68e0abec"
	I1210 06:10:46.302602  306174 cri.go:89] found id: "f35e84264756175001d4f7ecb61402e5f125b72604a043a8c128a061d528b9fd"
	I1210 06:10:46.302607  306174 cri.go:89] found id: "a8135c67f54958dd4233b61e0d015aa8778759365567de664bda3e2ba8db00ab"
	I1210 06:10:46.302610  306174 cri.go:89] found id: "107fccc52114724b5c7829573ed47387d6cbba2579d253522d27aec12a9ce2af"
	I1210 06:10:46.302624  306174 cri.go:89] found id: "0f33d39c6190527f6d0e9ac7647ac9706f3280f63c52c18b858fc59065309e3e"
	I1210 06:10:46.302628  306174 cri.go:89] found id: "deaa3b6fcb814994f944b5b7e7ec3daa03eee3377299d9707ee55f5419d0fefe"
	I1210 06:10:46.302632  306174 cri.go:89] found id: "8dddd76371ac09675b74acb4cc1233f21c564c3ec61620eb5510f2aa62d0fd76"
	I1210 06:10:46.302636  306174 cri.go:89] found id: ""
	I1210 06:10:46.302681  306174 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:10:46.314282  306174 retry.go:31] will retry after 327.363117ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:10:46Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:10:46.642688  306174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:46.661071  306174 pause.go:52] kubelet running: false
	I1210 06:10:46.661174  306174 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:10:46.844397  306174 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:10:46.844486  306174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:10:46.942185  306174 cri.go:89] found id: "66ade9741737d0618d31b9e331c6d038cdc1e2bb1fd529e9541980bf68e0abec"
	I1210 06:10:46.942214  306174 cri.go:89] found id: "f35e84264756175001d4f7ecb61402e5f125b72604a043a8c128a061d528b9fd"
	I1210 06:10:46.942220  306174 cri.go:89] found id: "a8135c67f54958dd4233b61e0d015aa8778759365567de664bda3e2ba8db00ab"
	I1210 06:10:46.942294  306174 cri.go:89] found id: "107fccc52114724b5c7829573ed47387d6cbba2579d253522d27aec12a9ce2af"
	I1210 06:10:46.942299  306174 cri.go:89] found id: "0f33d39c6190527f6d0e9ac7647ac9706f3280f63c52c18b858fc59065309e3e"
	I1210 06:10:46.942321  306174 cri.go:89] found id: "deaa3b6fcb814994f944b5b7e7ec3daa03eee3377299d9707ee55f5419d0fefe"
	I1210 06:10:46.942326  306174 cri.go:89] found id: "8dddd76371ac09675b74acb4cc1233f21c564c3ec61620eb5510f2aa62d0fd76"
	I1210 06:10:46.942330  306174 cri.go:89] found id: ""
	I1210 06:10:46.942423  306174 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:10:46.959704  306174 retry.go:31] will retry after 427.611833ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:10:46Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:10:47.388283  306174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:47.406245  306174 pause.go:52] kubelet running: false
	I1210 06:10:47.406315  306174 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:10:47.565048  306174 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:10:47.565140  306174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:10:47.635248  306174 cri.go:89] found id: "66ade9741737d0618d31b9e331c6d038cdc1e2bb1fd529e9541980bf68e0abec"
	I1210 06:10:47.635269  306174 cri.go:89] found id: "f35e84264756175001d4f7ecb61402e5f125b72604a043a8c128a061d528b9fd"
	I1210 06:10:47.635274  306174 cri.go:89] found id: "a8135c67f54958dd4233b61e0d015aa8778759365567de664bda3e2ba8db00ab"
	I1210 06:10:47.635277  306174 cri.go:89] found id: "107fccc52114724b5c7829573ed47387d6cbba2579d253522d27aec12a9ce2af"
	I1210 06:10:47.635280  306174 cri.go:89] found id: "0f33d39c6190527f6d0e9ac7647ac9706f3280f63c52c18b858fc59065309e3e"
	I1210 06:10:47.635284  306174 cri.go:89] found id: "deaa3b6fcb814994f944b5b7e7ec3daa03eee3377299d9707ee55f5419d0fefe"
	I1210 06:10:47.635288  306174 cri.go:89] found id: "8dddd76371ac09675b74acb4cc1233f21c564c3ec61620eb5510f2aa62d0fd76"
	I1210 06:10:47.635293  306174 cri.go:89] found id: ""
	I1210 06:10:47.635347  306174 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:10:47.650840  306174 out.go:203] 
	W1210 06:10:47.651938  306174 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:10:47.651958  306174 out.go:285] * 
	* 
	W1210 06:10:47.658842  306174 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:10:47.660014  306174 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-257171 --alsologtostderr -v=5" : exit status 80
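The retry.go lines in the stderr above show the pause path re-running `sudo runc list -f json` after short, growing delays before exiting with GUEST_PAUSE; the command keeps failing because /run/runc does not exist on the node. A minimal sketch of that retry-with-backoff pattern is below; the attempt count and delays are illustrative assumptions, not minikube's actual retry package.

// Minimal sketch: retry a flaky command with a jittered, growing delay,
// in the spirit of the "will retry after ..." lines above. Attempt count,
// base delay, and the final command are illustrative assumptions.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runWithRetry runs cmdline up to attempts times, sleeping a jittered,
// doubling delay between failures, and returns the last error.
func runWithRetry(attempts int, cmdline ...string) error {
	var err error
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, e := exec.Command(cmdline[0], cmdline[1:]...).CombinedOutput()
		if e == nil {
			fmt.Printf("ok: %s", out)
			return nil
		}
		err = fmt.Errorf("%v: %s", e, out)
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	// Same check the pause path performs; on this node it fails with
	// "open /run/runc: no such file or directory" because the runc root is absent.
	if err := runWithRetry(3, "sudo", "runc", "list", "-f", "json"); err != nil {
		fmt.Println("giving up:", err)
	}
}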
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-257171
helpers_test.go:244: (dbg) docker inspect pause-257171:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb",
	        "Created": "2025-12-10T06:09:52.796235718Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288929,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:09:52.824493444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/hosts",
	        "LogPath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb-json.log",
	        "Name": "/pause-257171",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-257171:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-257171",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb",
	                "LowerDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-257171",
	                "Source": "/var/lib/docker/volumes/pause-257171/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-257171",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-257171",
	                "name.minikube.sigs.k8s.io": "pause-257171",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3a31e798092a8ca37a84b10bde16f09c3bb95260da866b78352d49d559a705e6",
	            "SandboxKey": "/var/run/docker/netns/3a31e798092a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-257171": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0397b6bc6aea8076e07511fb3421bf43e750217f467f117f9cb843e5fc24d81f",
	                    "EndpointID": "e46b9ae2c1fc1623ceab885d7e3d571bec114ebc25b1bc72379aa386ff90c087",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ce:ce:0e:46:75:19",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-257171",
	                        "93be872b4015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
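The 22/tcp binding in the inspect output above (127.0.0.1:33053) is the same value the pause command read earlier via docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-257171 before opening its SSH session. A minimal sketch of that lookup follows, assuming a local docker CLI on PATH and using this test's container name for illustration.

// Minimal sketch: read a container's published host port for 22/tcp with the
// same `docker container inspect --format` template seen earlier in this log.
// Assumes a docker CLI on PATH; the container name is this test's profile.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPortForSSH(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortForSSH("pause-257171")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// Matches the "22/tcp" -> "33053" binding shown in the inspect output above.
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}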
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-257171 -n pause-257171
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-257171 -n pause-257171: exit status 2 (363.61075ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-257171 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-257171 logs -n 25: (1.217116847s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-094798 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ ssh     │ -p cilium-094798 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ ssh     │ -p cilium-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ ssh     │ -p cilium-094798 sudo crio config                                                                                                                                                                                         │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ delete  │ -p cilium-094798                                                                                                                                                                                                          │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ start   │ -p force-systemd-env-872487 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-872487  │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:07 UTC │
	│ delete  │ -p force-systemd-env-872487                                                                                                                                                                                               │ force-systemd-env-872487  │ jenkins │ v1.37.0 │ 10 Dec 25 06:07 UTC │ 10 Dec 25 06:07 UTC │
	│ start   │ -p cert-expiration-790790 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-790790    │ jenkins │ v1.37.0 │ 10 Dec 25 06:07 UTC │ 10 Dec 25 06:07 UTC │
	│ start   │ -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-196025 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ start   │ -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                             │ kubernetes-upgrade-196025 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p kubernetes-upgrade-196025                                                                                                                                                                                              │ kubernetes-upgrade-196025 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -p cert-options-357277 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ ssh     │ cert-options-357277 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ ssh     │ -p cert-options-357277 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p cert-options-357277                                                                                                                                                                                                    │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -p pause-257171 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-257171              │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:10 UTC │
	│ delete  │ -p stopped-upgrade-616121                                                                                                                                                                                                 │ stopped-upgrade-616121    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p auto-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-094798               │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	│ start   │ -p cert-expiration-790790 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                 │ cert-expiration-790790    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p pause-257171 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-257171              │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ delete  │ -p cert-expiration-790790                                                                                                                                                                                                 │ cert-expiration-790790    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p kindnet-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                  │ kindnet-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	│ delete  │ -p running-upgrade-897548                                                                                                                                                                                                 │ running-upgrade-897548    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p calico-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                    │ calico-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	│ pause   │ -p pause-257171 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-257171              │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:10:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:10:43.100639  304042 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:10:43.101008  304042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:10:43.101018  304042 out.go:374] Setting ErrFile to fd 2...
	I1210 06:10:43.101024  304042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:10:43.101312  304042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:10:43.101864  304042 out.go:368] Setting JSON to false
	I1210 06:10:43.103445  304042 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3187,"bootTime":1765343856,"procs":425,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:10:43.103525  304042 start.go:143] virtualization: kvm guest
	I1210 06:10:43.105741  304042 out.go:179] * [calico-094798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:10:43.106996  304042 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:10:43.107006  304042 notify.go:221] Checking for updates...
	I1210 06:10:43.109352  304042 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:10:43.111837  304042 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:10:43.116375  304042 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:10:43.117680  304042 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:10:43.118770  304042 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:10:43.120543  304042 config.go:182] Loaded profile config "auto-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.120712  304042 config.go:182] Loaded profile config "kindnet-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.120905  304042 config.go:182] Loaded profile config "pause-257171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.121049  304042 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:10:43.149931  304042 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:10:43.150065  304042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:10:43.224585  304042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-10 06:10:43.213492096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:10:43.224693  304042 docker.go:319] overlay module found
	I1210 06:10:43.230445  304042 out.go:179] * Using the docker driver based on user configuration
	I1210 06:10:43.232279  304042 start.go:309] selected driver: docker
	I1210 06:10:43.232290  304042 start.go:927] validating driver "docker" against <nil>
	I1210 06:10:43.232304  304042 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:10:43.232941  304042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:10:43.298031  304042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-10 06:10:43.28651193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:10:43.298261  304042 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:10:43.298552  304042 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:10:43.300190  304042 out.go:179] * Using Docker driver with root privileges
	I1210 06:10:43.301339  304042 cni.go:84] Creating CNI manager for "calico"
	I1210 06:10:43.301361  304042 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1210 06:10:43.301444  304042 start.go:353] cluster config:
	{Name:calico-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-094798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:43.302698  304042 out.go:179] * Starting "calico-094798" primary control-plane node in "calico-094798" cluster
	I1210 06:10:43.304333  304042 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:10:43.306523  304042 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:10:43.307502  304042 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:10:43.307604  304042 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:10:43.333694  304042 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:10:43.333722  304042 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:10:43.335295  304042 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 06:10:43.418981  304042 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:10:43.419139  304042 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/calico-094798/config.json ...
	I1210 06:10:43.419190  304042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/calico-094798/config.json: {Name:mk2284af8bfa69aea07c58a07e318e3ef2d6a29f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.419261  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:43.419361  304042 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:10:43.419410  304042 start.go:360] acquireMachinesLock for calico-094798: {Name:mk11609186fd3775863e23d3c3f6cd14ef0616fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.419479  304042 start.go:364] duration metric: took 46.952µs to acquireMachinesLock for "calico-094798"
	I1210 06:10:43.419501  304042 start.go:93] Provisioning new machine with config: &{Name:calico-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-094798 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:10:43.419609  304042 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:10:42.146840  300941 cli_runner.go:164] Run: docker network inspect pause-257171 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:42.167292  300941 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:10:42.171675  300941 kubeadm.go:884] updating cluster {Name:pause-257171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-257171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:10:42.172063  300941 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:42.387491  300941 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:42.529985  300941 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:42.682189  300941 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:10:42.682260  300941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:10:42.716579  300941 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:10:42.716600  300941 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:10:42.716608  300941 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1210 06:10:42.716732  300941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-257171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-257171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:10:42.716814  300941 ssh_runner.go:195] Run: crio config
	I1210 06:10:42.776935  300941 cni.go:84] Creating CNI manager for ""
	I1210 06:10:42.776962  300941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:10:42.776982  300941 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:10:42.777012  300941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-257171 NodeName:pause-257171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:10:42.777205  300941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-257171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:10:42.777278  300941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:10:42.786926  300941 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:10:42.787005  300941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:10:42.800455  300941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1210 06:10:42.825837  300941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:10:42.848825  300941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
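
The multi-document kubeadm.yaml rendered above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single file before it is copied to the node. A stdlib-only way to sanity-check that all four documents made it into the transferred file (a sketch, not part of the test suite; the path is the one shown in the scp step above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Path taken from the scp step above; point it at a local copy if inspecting offline.
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm configs are plain multi-document YAML separated by "---" lines.
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				// Expect InitConfiguration, ClusterConfiguration,
    				// KubeletConfiguration and KubeProxyConfiguration.
    				fmt.Println(strings.TrimSpace(line))
    			}
    		}
    	}
    }
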
	I1210 06:10:42.880752  300941 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:10:42.886976  300941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:43.033982  300941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:43.048691  300941 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171 for IP: 192.168.76.2
	I1210 06:10:43.048713  300941 certs.go:195] generating shared ca certs ...
	I1210 06:10:43.048731  300941 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.048889  300941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:10:43.048950  300941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:10:43.048963  300941 certs.go:257] generating profile certs ...
	I1210 06:10:43.049093  300941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.key
	I1210 06:10:43.049175  300941 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/apiserver.key.49fe122d
	I1210 06:10:43.049238  300941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/proxy-client.key
	I1210 06:10:43.049379  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:10:43.049422  300941 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:10:43.049436  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:10:43.049476  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:10:43.049511  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:10:43.049551  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:10:43.049613  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:10:43.050464  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:10:43.068614  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:10:43.092546  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:10:43.116372  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:10:43.139733  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 06:10:43.160779  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:10:43.187280  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:10:43.211991  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:10:43.232266  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:10:43.250936  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:10:43.272678  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:10:43.295129  300941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:10:43.309943  300941 ssh_runner.go:195] Run: openssl version
	I1210 06:10:43.316756  300941 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.324333  300941 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:10:43.332538  300941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.336334  300941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.336377  300941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.373442  300941 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:10:43.381417  300941 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.389224  300941 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:10:43.397735  300941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.402046  300941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.402111  300941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.442306  300941 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:10:43.452375  300941 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.462434  300941 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:10:43.471617  300941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.476749  300941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.476807  300941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.526857  300941 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:10:43.534947  300941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:10:43.538927  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:10:43.581137  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:10:43.632644  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:10:43.673982  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:10:43.711699  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:10:43.754802  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
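
Each of the `openssl x509 ... -checkend 86400` probes above asks whether a control-plane certificate expires within the next 24 hours. The same check can be mirrored in Go with crypto/x509 (a sketch; the path below is one of the certs probed in the log and any of the others works the same way):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400 fails when the cert expires within 86400 seconds.
    	if cert.NotAfter.Before(time.Now().Add(86400 * time.Second)) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    	} else {
    		fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
    	}
    }
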
	I1210 06:10:43.804206  300941 kubeadm.go:401] StartCluster: {Name:pause-257171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-257171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:43.804343  300941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:10:43.804396  300941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:10:43.834614  300941 cri.go:89] found id: "66ade9741737d0618d31b9e331c6d038cdc1e2bb1fd529e9541980bf68e0abec"
	I1210 06:10:43.834637  300941 cri.go:89] found id: "f35e84264756175001d4f7ecb61402e5f125b72604a043a8c128a061d528b9fd"
	I1210 06:10:43.834643  300941 cri.go:89] found id: "a8135c67f54958dd4233b61e0d015aa8778759365567de664bda3e2ba8db00ab"
	I1210 06:10:43.834647  300941 cri.go:89] found id: "107fccc52114724b5c7829573ed47387d6cbba2579d253522d27aec12a9ce2af"
	I1210 06:10:43.834651  300941 cri.go:89] found id: "0f33d39c6190527f6d0e9ac7647ac9706f3280f63c52c18b858fc59065309e3e"
	I1210 06:10:43.834662  300941 cri.go:89] found id: "deaa3b6fcb814994f944b5b7e7ec3daa03eee3377299d9707ee55f5419d0fefe"
	I1210 06:10:43.834666  300941 cri.go:89] found id: "8dddd76371ac09675b74acb4cc1233f21c564c3ec61620eb5510f2aa62d0fd76"
	I1210 06:10:43.834673  300941 cri.go:89] found id: ""
	I1210 06:10:43.834715  300941 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:10:43.847718  300941 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:10:43Z" level=error msg="open /run/runc: no such file or directory"
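
The warning block above is the notable part of this profile's pause/unpause flow: `sudo runc list -f json` exits 1 because /run/runc does not exist on the node, so minikube cannot enumerate paused containers and falls back to treating none as paused. Reproducing just that probe (a sketch of the same command the log runs via ssh_runner, not minikube's own code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same probe as in the log; on this node it fails with
    	// "open /run/runc: no such file or directory" on stderr.
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil {
    		fmt.Printf("runc list failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("%s", out)
    }
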
	I1210 06:10:43.847793  300941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:10:43.857915  300941 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:10:43.857934  300941 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:10:43.857979  300941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:10:43.870726  300941 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:10:43.871357  300941 kubeconfig.go:125] found "pause-257171" server: "https://192.168.76.2:8443"
	I1210 06:10:43.872154  300941 kapi.go:59] client config for pause-257171: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.key", CAFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:10:43.872726  300941 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:10:43.872745  300941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:10:43.872752  300941 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:10:43.872758  300941 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:10:43.872763  300941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:10:43.873205  300941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:10:43.881790  300941 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 06:10:43.881824  300941 kubeadm.go:602] duration metric: took 23.883302ms to restartPrimaryControlPlane
	I1210 06:10:43.881835  300941 kubeadm.go:403] duration metric: took 77.63898ms to StartCluster
	I1210 06:10:43.881853  300941 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.881925  300941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:10:43.882513  300941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.882745  300941 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:10:43.882870  300941 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:10:43.882989  300941 config.go:182] Loaded profile config "pause-257171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.884376  300941 out.go:179] * Verifying Kubernetes components...
	I1210 06:10:43.884374  300941 out.go:179] * Enabled addons: 
	I1210 06:10:40.872976  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:40.874514  302200 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:10:40.874766  302200 start.go:159] libmachine.API.Create for "kindnet-094798" (driver="docker")
	I1210 06:10:40.874803  302200 client.go:173] LocalClient.Create starting
	I1210 06:10:40.874905  302200 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:10:40.874946  302200 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:40.874971  302200 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:40.875029  302200 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:10:40.875055  302200 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:40.875069  302200 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:40.875529  302200 cli_runner.go:164] Run: docker network inspect kindnet-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:10:40.899685  302200 cli_runner.go:211] docker network inspect kindnet-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:10:40.899834  302200 network_create.go:284] running [docker network inspect kindnet-094798] to gather additional debugging logs...
	I1210 06:10:40.899863  302200 cli_runner.go:164] Run: docker network inspect kindnet-094798
	W1210 06:10:40.928166  302200 cli_runner.go:211] docker network inspect kindnet-094798 returned with exit code 1
	I1210 06:10:40.928199  302200 network_create.go:287] error running [docker network inspect kindnet-094798]: docker network inspect kindnet-094798: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-094798 not found
	I1210 06:10:40.928216  302200 network_create.go:289] output of [docker network inspect kindnet-094798]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-094798 not found
	
	** /stderr **
	I1210 06:10:40.928365  302200 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:40.956957  302200 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:10:40.959073  302200 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:10:40.959841  302200 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:10:40.960683  302200 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0397b6bc6aea IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:c5:49:61:c0:1c} reservation:<nil>}
	I1210 06:10:40.961771  302200 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec5750}
	I1210 06:10:40.961819  302200 network_create.go:124] attempt to create docker network kindnet-094798 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:10:40.961886  302200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-094798 kindnet-094798
	I1210 06:10:41.019644  302200 network_create.go:108] docker network kindnet-094798 192.168.85.0/24 created
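
The subnet scan above walks private 192.168.x.0/24 ranges in steps of 9 (49, 58, 67, 76, ...) and settles on the first one no existing bridge occupies. A stripped-down sketch of that selection (the candidate spacing is read off the log; minikube's real implementation also inspects host interfaces and reservations):

    package main

    import "fmt"

    func main() {
    	// Subnets already claimed by other minikube networks, as listed in the log.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    	}
    	// Walk candidates with the same spacing the log shows (49, 58, 67, 76, 85, ...).
    	for third := 49; third < 256; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if taken[cidr] {
    			continue
    		}
    		fmt.Println("first free private subnet:", cidr) // 192.168.85.0/24, matching the log
    		return
    	}
    	fmt.Println("no free subnet found")
    }
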
	I1210 06:10:41.019675  302200 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-094798" container
	I1210 06:10:41.019743  302200 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:10:41.038354  302200 cli_runner.go:164] Run: docker volume create kindnet-094798 --label name.minikube.sigs.k8s.io=kindnet-094798 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:10:41.056638  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:41.057440  302200 oci.go:103] Successfully created a docker volume kindnet-094798
	I1210 06:10:41.057514  302200 cli_runner.go:164] Run: docker run --rm --name kindnet-094798-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-094798 --entrypoint /usr/bin/test -v kindnet-094798:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:10:41.206379  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:41.334454  302200 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334471  302200 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334505  302200 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334510  302200 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334471  302200 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334476  302200 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334508  302200 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334583  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:10:41.334601  302200 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 166.467µs
	I1210 06:10:41.334612  302200 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:10:41.334623  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:10:41.334637  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:10:41.334649  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:10:41.334649  302200 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 179.947µs
	I1210 06:10:41.334657  302200 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:10:41.334658  302200 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 154.857µs
	I1210 06:10:41.334668  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:10:41.334674  302200 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 177.71µs
	I1210 06:10:41.334683  302200 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:10:41.334669  302200 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:10:41.334687  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:10:41.334634  302200 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 128.85µs
	I1210 06:10:41.334716  302200 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 256.883µs
	I1210 06:10:41.334726  302200 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:10:41.334696  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:10:41.334714  302200 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334752  302200 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 291.13µs
	I1210 06:10:41.334749  302200 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:10:41.334779  302200 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:10:41.334842  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:10:41.334860  302200 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 191.547µs
	I1210 06:10:41.334871  302200 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:10:41.334887  302200 cache.go:87] Successfully saved all images to host disk.
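
Each "cache image ... exists" line above is essentially a stat of the per-image file under the cache directory; when the file is already present the save is skipped in microseconds. A minimal sketch of that check (the cache root below assumes the default ~/.minikube layout rather than the Jenkins path in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	home, err := os.UserHomeDir()
    	if err != nil {
    		panic(err)
    	}
    	// One of the images listed in the log; the others live alongside it.
    	p := filepath.Join(home, ".minikube", "cache", "images", "amd64",
    		"registry.k8s.io", "kube-apiserver_v1.34.3")
    	if fi, err := os.Stat(p); err == nil && fi.Size() > 0 {
    		fmt.Println("already cached, skipping save:", p)
    	} else {
    		fmt.Println("not cached, would save tar file to:", p)
    	}
    }
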
	I1210 06:10:42.015731  302200 oci.go:107] Successfully prepared a docker volume kindnet-094798
	I1210 06:10:42.015790  302200 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:10:42.015903  302200 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:10:42.015938  302200 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:10:42.016008  302200 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:10:42.082131  302200 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-094798 --name kindnet-094798 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-094798 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-094798 --network kindnet-094798 --ip 192.168.85.2 --volume kindnet-094798:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:10:42.461394  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Running}}
	I1210 06:10:42.478783  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Status}}
	I1210 06:10:42.497133  302200 cli_runner.go:164] Run: docker exec kindnet-094798 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:10:42.549911  302200 oci.go:144] the created container "kindnet-094798" has a running status.
	I1210 06:10:42.549944  302200 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa...
	I1210 06:10:42.613910  302200 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:10:42.791148  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Status}}
	I1210 06:10:42.818281  302200 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:10:42.818302  302200 kic_runner.go:114] Args: [docker exec --privileged kindnet-094798 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:10:42.885896  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Status}}
	I1210 06:10:42.913073  302200 machine.go:94] provisionDockerMachine start ...
	I1210 06:10:42.913178  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:42.939674  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:42.940038  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:42.940058  302200 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:10:43.093738  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-094798
	
	I1210 06:10:43.093777  302200 ubuntu.go:182] provisioning hostname "kindnet-094798"
	I1210 06:10:43.093846  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.119149  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:43.119474  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:43.119497  302200 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-094798 && echo "kindnet-094798" | sudo tee /etc/hostname
	I1210 06:10:43.289995  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-094798
	
	I1210 06:10:43.290163  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.310205  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:43.310492  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:43.310518  302200 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-094798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-094798/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-094798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:10:43.449253  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:10:43.449342  302200 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:10:43.449389  302200 ubuntu.go:190] setting up certificates
	I1210 06:10:43.449420  302200 provision.go:84] configureAuth start
	I1210 06:10:43.449475  302200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-094798
	I1210 06:10:43.471977  302200 provision.go:143] copyHostCerts
	I1210 06:10:43.472033  302200 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:10:43.472046  302200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:10:43.472139  302200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:10:43.472252  302200 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:10:43.472273  302200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:10:43.472318  302200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:10:43.472390  302200 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:10:43.472405  302200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:10:43.472438  302200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:10:43.472516  302200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.kindnet-094798 san=[127.0.0.1 192.168.85.2 kindnet-094798 localhost minikube]
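
The server certificate above is issued for the SANs 127.0.0.1, 192.168.85.2, kindnet-094798, localhost and minikube. For illustration, roughly how such a SAN list is expressed with crypto/x509 (a self-signed sketch only; minikube actually signs the server cert with the ca.pem/ca-key.pem pair named above):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-094798"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list copied from the provision step above.
    		DNSNames:    []string{"kindnet-094798", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
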
	I1210 06:10:43.637009  302200 provision.go:177] copyRemoteCerts
	I1210 06:10:43.637061  302200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:10:43.637115  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.658553  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:43.757886  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:10:43.780472  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1210 06:10:43.801966  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:10:43.823504  302200 provision.go:87] duration metric: took 374.064124ms to configureAuth
	I1210 06:10:43.823531  302200 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:10:43.823711  302200 config.go:182] Loaded profile config "kindnet-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.823835  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.844959  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:43.845265  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:43.845292  302200 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:10:44.163201  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:10:44.163224  302200 machine.go:97] duration metric: took 1.250112713s to provisionDockerMachine
	I1210 06:10:44.163234  302200 client.go:176] duration metric: took 3.28842395s to LocalClient.Create
	I1210 06:10:44.163245  302200 start.go:167] duration metric: took 3.288482122s to libmachine.API.Create "kindnet-094798"
	I1210 06:10:44.163252  302200 start.go:293] postStartSetup for "kindnet-094798" (driver="docker")
	I1210 06:10:44.163260  302200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:10:44.163304  302200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:10:44.163336  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.183120  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.289321  302200 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:10:44.293234  302200 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:10:44.293266  302200 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:10:44.293276  302200 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:10:44.293333  302200 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:10:44.293476  302200 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:10:44.293621  302200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:10:44.301797  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:10:44.326196  302200 start.go:296] duration metric: took 162.929971ms for postStartSetup
	I1210 06:10:44.326593  302200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-094798
	I1210 06:10:44.345760  302200 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/config.json ...
	I1210 06:10:44.346051  302200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:10:44.346160  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.364405  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.463185  302200 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:10:44.469341  302200 start.go:128] duration metric: took 3.59705785s to createHost
	I1210 06:10:44.469365  302200 start.go:83] releasing machines lock for "kindnet-094798", held for 3.597192515s
	I1210 06:10:44.469435  302200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-094798
	I1210 06:10:44.491115  302200 ssh_runner.go:195] Run: cat /version.json
	I1210 06:10:44.491132  302200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:10:44.491167  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.491208  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.511371  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.513385  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.689031  302200 ssh_runner.go:195] Run: systemctl --version
	I1210 06:10:44.698132  302200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:10:44.746671  302200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:10:44.752056  302200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:10:44.752146  302200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:10:44.785107  302200 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:10:44.785135  302200 start.go:496] detecting cgroup driver to use...
	I1210 06:10:44.785166  302200 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:10:44.785213  302200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:10:44.803792  302200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:10:44.816894  302200 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:10:44.816934  302200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:10:44.834670  302200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:10:44.852479  302200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:10:44.960233  302200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:10:45.064798  302200 docker.go:234] disabling docker service ...
	I1210 06:10:45.064859  302200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:10:45.086883  302200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:10:45.100801  302200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:10:45.191049  302200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:10:45.280393  302200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:10:45.293555  302200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:10:45.308439  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:45.447452  302200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:10:45.447507  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.457736  302200 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:10:45.457787  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.466192  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.474474  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.482468  302200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:10:45.490122  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.498552  302200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
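
Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned, the cgroup manager is switched to systemd, and conmon is placed in the pod cgroup. Roughly (a sketch assuming the stock table layout of the drop-in; only the touched keys are shown):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
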
	I1210 06:10:43.886148  300941 addons.go:530] duration metric: took 3.299436ms for enable addons: enabled=[]
	I1210 06:10:43.886186  300941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:44.025176  300941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:44.042730  300941 node_ready.go:35] waiting up to 6m0s for node "pause-257171" to be "Ready" ...
	I1210 06:10:44.051926  300941 node_ready.go:49] node "pause-257171" is "Ready"
	I1210 06:10:44.051947  300941 node_ready.go:38] duration metric: took 9.176448ms for node "pause-257171" to be "Ready" ...
	I1210 06:10:44.051959  300941 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:10:44.051996  300941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:10:44.064662  300941 api_server.go:72] duration metric: took 181.871597ms to wait for apiserver process to appear ...
	I1210 06:10:44.064688  300941 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:10:44.064706  300941 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:10:44.069621  300941 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:10:44.070741  300941 api_server.go:141] control plane version: v1.34.3
	I1210 06:10:44.070781  300941 api_server.go:131] duration metric: took 6.085464ms to wait for apiserver health ...
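
The healthz probe above hits https://192.168.76.2:8443/healthz with the profile's client certificate and expects a bare 200/ok. An equivalent standalone check (a sketch; cert, key and CA paths are taken from the rest.Config dump earlier in this log):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	base := "/home/jenkins/minikube-integration/22094-5725/.minikube"
    	cert, err := tls.LoadX509KeyPair(
    		base+"/profiles/pause-257171/client.crt",
    		base+"/profiles/pause-257171/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile(base + "/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.76.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // the log shows 200 and "ok"
    }
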
	I1210 06:10:44.070792  300941 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:10:44.074482  300941 system_pods.go:59] 7 kube-system pods found
	I1210 06:10:44.074522  300941 system_pods.go:61] "coredns-66bc5c9577-t6x5x" [b893f947-02d7-41b1-9886-9b0830ddf69c] Running
	I1210 06:10:44.074536  300941 system_pods.go:61] "etcd-pause-257171" [9e550ac3-4987-44a9-9425-c7758c2d698e] Running
	I1210 06:10:44.074544  300941 system_pods.go:61] "kindnet-8nqff" [afb8ed20-85e5-48ca-9b80-aba3e0f6e330] Running
	I1210 06:10:44.074549  300941 system_pods.go:61] "kube-apiserver-pause-257171" [9fade971-ee7e-4542-8911-93a6aa0fed0c] Running
	I1210 06:10:44.074558  300941 system_pods.go:61] "kube-controller-manager-pause-257171" [caac76e5-4ae6-4766-aa2c-31badc24a748] Running
	I1210 06:10:44.074569  300941 system_pods.go:61] "kube-proxy-hd5t7" [5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942] Running
	I1210 06:10:44.074577  300941 system_pods.go:61] "kube-scheduler-pause-257171" [ccca1914-c8e1-4da2-b9fc-60e3b1097de8] Running
	I1210 06:10:44.074585  300941 system_pods.go:74] duration metric: took 3.785527ms to wait for pod list to return data ...
	I1210 06:10:44.074595  300941 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:10:44.077115  300941 default_sa.go:45] found service account: "default"
	I1210 06:10:44.077135  300941 default_sa.go:55] duration metric: took 2.528645ms for default service account to be created ...
	I1210 06:10:44.077144  300941 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:10:44.079945  300941 system_pods.go:86] 7 kube-system pods found
	I1210 06:10:44.079972  300941 system_pods.go:89] "coredns-66bc5c9577-t6x5x" [b893f947-02d7-41b1-9886-9b0830ddf69c] Running
	I1210 06:10:44.079980  300941 system_pods.go:89] "etcd-pause-257171" [9e550ac3-4987-44a9-9425-c7758c2d698e] Running
	I1210 06:10:44.079986  300941 system_pods.go:89] "kindnet-8nqff" [afb8ed20-85e5-48ca-9b80-aba3e0f6e330] Running
	I1210 06:10:44.079991  300941 system_pods.go:89] "kube-apiserver-pause-257171" [9fade971-ee7e-4542-8911-93a6aa0fed0c] Running
	I1210 06:10:44.079998  300941 system_pods.go:89] "kube-controller-manager-pause-257171" [caac76e5-4ae6-4766-aa2c-31badc24a748] Running
	I1210 06:10:44.080004  300941 system_pods.go:89] "kube-proxy-hd5t7" [5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942] Running
	I1210 06:10:44.080014  300941 system_pods.go:89] "kube-scheduler-pause-257171" [ccca1914-c8e1-4da2-b9fc-60e3b1097de8] Running
	I1210 06:10:44.080023  300941 system_pods.go:126] duration metric: took 2.873065ms to wait for k8s-apps to be running ...
	I1210 06:10:44.080037  300941 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:10:44.080106  300941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:44.095165  300941 system_svc.go:56] duration metric: took 15.120844ms WaitForService to wait for kubelet
	I1210 06:10:44.095188  300941 kubeadm.go:587] duration metric: took 212.405865ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:10:44.095207  300941 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:10:44.098252  300941 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:10:44.098277  300941 node_conditions.go:123] node cpu capacity is 8
	I1210 06:10:44.098293  300941 node_conditions.go:105] duration metric: took 3.080616ms to run NodePressure ...
	I1210 06:10:44.098307  300941 start.go:242] waiting for startup goroutines ...
	I1210 06:10:44.098317  300941 start.go:247] waiting for cluster config update ...
	I1210 06:10:44.098328  300941 start.go:256] writing updated cluster config ...
	I1210 06:10:44.099999  300941 ssh_runner.go:195] Run: rm -f paused
	I1210 06:10:44.104185  300941 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:10:44.104786  300941 kapi.go:59] client config for pause-257171: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.key", CAFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:10:44.107329  300941 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t6x5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.112237  300941 pod_ready.go:94] pod "coredns-66bc5c9577-t6x5x" is "Ready"
	I1210 06:10:44.112259  300941 pod_ready.go:86] duration metric: took 4.908574ms for pod "coredns-66bc5c9577-t6x5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.114493  300941 pod_ready.go:83] waiting for pod "etcd-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.119694  300941 pod_ready.go:94] pod "etcd-pause-257171" is "Ready"
	I1210 06:10:44.119716  300941 pod_ready.go:86] duration metric: took 5.203561ms for pod "etcd-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.121850  300941 pod_ready.go:83] waiting for pod "kube-apiserver-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.126981  300941 pod_ready.go:94] pod "kube-apiserver-pause-257171" is "Ready"
	I1210 06:10:44.127001  300941 pod_ready.go:86] duration metric: took 5.133144ms for pod "kube-apiserver-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.129462  300941 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.509792  300941 pod_ready.go:94] pod "kube-controller-manager-pause-257171" is "Ready"
	I1210 06:10:44.509824  300941 pod_ready.go:86] duration metric: took 380.342511ms for pod "kube-controller-manager-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.710389  300941 pod_ready.go:83] waiting for pod "kube-proxy-hd5t7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.108844  300941 pod_ready.go:94] pod "kube-proxy-hd5t7" is "Ready"
	I1210 06:10:45.108877  300941 pod_ready.go:86] duration metric: took 398.461834ms for pod "kube-proxy-hd5t7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.309300  300941 pod_ready.go:83] waiting for pod "kube-scheduler-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.708540  300941 pod_ready.go:94] pod "kube-scheduler-pause-257171" is "Ready"
	I1210 06:10:45.708574  300941 pod_ready.go:86] duration metric: took 399.245118ms for pod "kube-scheduler-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.708589  300941 pod_ready.go:40] duration metric: took 1.604335403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:10:45.755967  300941 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:10:45.757591  300941 out.go:179] * Done! kubectl is now configured to use "pause-257171" cluster and "default" namespace by default
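	(With the kubeconfig context written, the same readiness conditions the log just verified can be re-checked by hand; illustrative commands only, using the label selectors from the extra-wait message above:

	    kubectl --context pause-257171 -n kube-system get pods \
	        -l 'k8s-app in (kube-dns, kube-proxy)' -o wide
	    kubectl --context pause-257171 -n kube-system wait pod \
	        -l component=kube-apiserver --for=condition=Ready --timeout=60s
	)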
	I1210 06:10:45.512907  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.521342  302200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:10:45.528560  302200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:10:45.535580  302200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:45.619386  302200 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:10:45.750523  302200 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:10:45.750593  302200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:10:45.755017  302200 start.go:564] Will wait 60s for crictl version
	I1210 06:10:45.755072  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:45.759162  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:10:45.786024  302200 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:10:45.786132  302200 ssh_runner.go:195] Run: crio --version
	I1210 06:10:45.819667  302200 ssh_runner.go:195] Run: crio --version
	I1210 06:10:45.852447  302200 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:10:41.529996  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:10:41.621360  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:10:41.639472  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1210 06:10:41.658069  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:10:41.675588  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:10:41.695516  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:10:41.719465  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:10:41.740054  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:10:41.757580  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:10:41.776676  295895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:10:41.790522  295895 ssh_runner.go:195] Run: openssl version
	I1210 06:10:41.796973  295895 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.804111  295895 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:10:41.811770  295895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.815900  295895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.815950  295895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.860019  295895 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:10:41.873722  295895 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:10:41.882903  295895 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.891738  295895 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:10:41.901891  295895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.906203  295895 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.906257  295895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.954459  295895 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:10:41.965493  295895 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9253.pem /etc/ssl/certs/51391683.0
	I1210 06:10:41.974456  295895 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:10:41.990887  295895 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:10:41.999525  295895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:10:42.003444  295895 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:10:42.003510  295895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:10:42.058065  295895 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:10:42.069005  295895 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92532.pem /etc/ssl/certs/3ec20f2e.0
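	(The block above repeats one pattern per certificate: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under that hash. Condensed into a standalone sketch, using the same commands as the log with one placeholder certificate:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem   # or 9253.pem / 92532.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	    ls -l "/etc/ssl/certs/${HASH}.0"
	)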
	I1210 06:10:42.078573  295895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:10:42.083210  295895 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:10:42.083270  295895 kubeadm.go:401] StartCluster: {Name:auto-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-094798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:42.083366  295895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:10:42.083412  295895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:10:42.117874  295895 cri.go:89] found id: ""
	I1210 06:10:42.117944  295895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:10:42.128028  295895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:10:42.136643  295895 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:10:42.136703  295895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:10:42.145951  295895 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:10:42.145971  295895 kubeadm.go:158] found existing configuration files:
	
	I1210 06:10:42.146016  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:10:42.155833  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:10:42.155897  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:10:42.165338  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:10:42.173982  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:10:42.174021  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:10:42.181998  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:10:42.190321  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:10:42.190376  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:10:42.198265  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:10:42.205910  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:10:42.205959  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:10:42.214513  295895 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:10:42.285066  295895 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:10:42.351519  295895 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
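	(The Service-Kubelet preflight warning is expected here: minikube starts kubelet directly, as in the `sudo systemctl start kubelet` call earlier in this log, rather than enabling the unit. On a node managed outside minikube, the warning's own suggestion would be, shown only for completeness:

	    sudo systemctl enable --now kubelet.service
	    systemctl is-enabled kubelet.service   # prints "enabled" once the unit symlink exists
	)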
	I1210 06:10:43.421322  304042 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:10:43.421591  304042 start.go:159] libmachine.API.Create for "calico-094798" (driver="docker")
	I1210 06:10:43.421627  304042 client.go:173] LocalClient.Create starting
	I1210 06:10:43.421702  304042 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:10:43.421738  304042 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:43.421765  304042 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:43.421826  304042 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:10:43.421852  304042 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:43.421866  304042 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:43.422314  304042 cli_runner.go:164] Run: docker network inspect calico-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:10:43.443474  304042 cli_runner.go:211] docker network inspect calico-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:10:43.443551  304042 network_create.go:284] running [docker network inspect calico-094798] to gather additional debugging logs...
	I1210 06:10:43.443574  304042 cli_runner.go:164] Run: docker network inspect calico-094798
	W1210 06:10:43.463840  304042 cli_runner.go:211] docker network inspect calico-094798 returned with exit code 1
	I1210 06:10:43.463872  304042 network_create.go:287] error running [docker network inspect calico-094798]: docker network inspect calico-094798: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-094798 not found
	I1210 06:10:43.463897  304042 network_create.go:289] output of [docker network inspect calico-094798]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-094798 not found
	
	** /stderr **
	I1210 06:10:43.464025  304042 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:43.486790  304042 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:10:43.487449  304042 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:10:43.488048  304042 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:10:43.488636  304042 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0397b6bc6aea IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:c5:49:61:c0:1c} reservation:<nil>}
	I1210 06:10:43.489444  304042 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-d6a8c526f793 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:22:92:d6:c3:5a:8b} reservation:<nil>}
	I1210 06:10:43.490247  304042 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0014a58b806a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:86:6f:7e:ad:4f:6b} reservation:<nil>}
	I1210 06:10:43.490983  304042 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f596d0}
	I1210 06:10:43.491010  304042 network_create.go:124] attempt to create docker network calico-094798 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 06:10:43.491070  304042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-094798 calico-094798
	I1210 06:10:43.544306  304042 network_create.go:108] docker network calico-094798 192.168.103.0/24 created
	I1210 06:10:43.544336  304042 kic.go:121] calculated static IP "192.168.103.2" for the "calico-094798" container
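	(The subnet selection above can be confirmed after the fact with a plain inspect of the new network; a manual cross-check in the same Go-template style minikube itself uses:

	    docker network inspect calico-094798 \
	        --format '{{ (index .IPAM.Config 0).Subnet }} gw {{ (index .IPAM.Config 0).Gateway }}'
	    # expected: 192.168.103.0/24 gw 192.168.103.1
	)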
	I1210 06:10:43.544408  304042 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:10:43.563025  304042 cli_runner.go:164] Run: docker volume create calico-094798 --label name.minikube.sigs.k8s.io=calico-094798 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:10:43.568562  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:43.584437  304042 oci.go:103] Successfully created a docker volume calico-094798
	I1210 06:10:43.584514  304042 cli_runner.go:164] Run: docker run --rm --name calico-094798-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-094798 --entrypoint /usr/bin/test -v calico-094798:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:10:43.727989  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:43.883008  304042 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883044  304042 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883104  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:10:43.883117  304042 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 118.348µs
	I1210 06:10:43.883128  304042 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:10:43.883148  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:10:43.883146  304042 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883164  304042 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 122.723µs
	I1210 06:10:43.883174  304042 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:10:43.883189  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:10:43.883190  304042 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883203  304042 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 59.737µs
	I1210 06:10:43.883214  304042 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:10:43.883209  304042 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883231  304042 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883268  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:10:43.883251  304042 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883280  304042 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 52.159µs
	I1210 06:10:43.883298  304042 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:10:43.883236  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:10:43.883304  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:10:43.883312  304042 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 124.77µs
	I1210 06:10:43.883313  304042 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 64.318µs
	I1210 06:10:43.883320  304042 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:10:43.883322  304042 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:10:43.883284  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:10:43.883334  304042 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 146.66µs
	I1210 06:10:43.883344  304042 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:10:43.883000  304042 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883377  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:10:43.883399  304042 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 416.293µs
	I1210 06:10:43.883407  304042 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:10:43.883421  304042 cache.go:87] Successfully saved all images to host disk.
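	(Each "save to tar file ... succeeded" above refers to a pre-existing tarball in the job's image cache; listing the directories from the log paths shows what was reused, purely as an illustration:

	    ls -lh /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/
	    ls -lh /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/
	)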
	I1210 06:10:44.004348  304042 oci.go:107] Successfully prepared a docker volume calico-094798
	I1210 06:10:44.004431  304042 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:10:44.004538  304042 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:10:44.004579  304042 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:10:44.004643  304042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:10:44.068299  304042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-094798 --name calico-094798 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-094798 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-094798 --network calico-094798 --ip 192.168.103.2 --volume calico-094798:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:10:44.378476  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Running}}
	I1210 06:10:44.398439  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Status}}
	I1210 06:10:44.416764  304042 cli_runner.go:164] Run: docker exec calico-094798 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:10:44.471779  304042 oci.go:144] the created container "calico-094798" has a running status.
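	(The static IP and published SSH port requested in the `docker run` above can be read back from the running container; a manual check whose port template mirrors the one the log uses later to open the SSH session:

	    docker container inspect calico-094798 \
	        --format '{{ (index .NetworkSettings.Networks "calico-094798").IPAddress }}'   # 192.168.103.2
	    docker port calico-094798 22/tcp                                                   # e.g. 127.0.0.1:33068
	)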
	I1210 06:10:44.471807  304042 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa...
	I1210 06:10:44.620373  304042 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:10:44.645911  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Status}}
	I1210 06:10:44.676511  304042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:10:44.676536  304042 kic_runner.go:114] Args: [docker exec --privileged calico-094798 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:10:44.735673  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Status}}
	I1210 06:10:44.762065  304042 machine.go:94] provisionDockerMachine start ...
	I1210 06:10:44.762193  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:44.785616  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:44.785875  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:44.785899  304042 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:10:44.926970  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-094798
	
	I1210 06:10:44.927004  304042 ubuntu.go:182] provisioning hostname "calico-094798"
	I1210 06:10:44.927105  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:44.950331  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:44.950659  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:44.950676  304042 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-094798 && echo "calico-094798" | sudo tee /etc/hostname
	I1210 06:10:45.104421  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-094798
	
	I1210 06:10:45.104496  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.126913  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:45.127138  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:45.127156  304042 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-094798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-094798/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-094798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:10:45.274866  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:10:45.274892  304042 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:10:45.274925  304042 ubuntu.go:190] setting up certificates
	I1210 06:10:45.274937  304042 provision.go:84] configureAuth start
	I1210 06:10:45.275008  304042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-094798
	I1210 06:10:45.294720  304042 provision.go:143] copyHostCerts
	I1210 06:10:45.294791  304042 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:10:45.294804  304042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:10:45.294889  304042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:10:45.295006  304042 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:10:45.295019  304042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:10:45.295062  304042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:10:45.295170  304042 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:10:45.295181  304042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:10:45.295220  304042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:10:45.295306  304042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.calico-094798 san=[127.0.0.1 192.168.103.2 calico-094798 localhost minikube]
	I1210 06:10:45.311527  304042 provision.go:177] copyRemoteCerts
	I1210 06:10:45.311591  304042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:10:45.311640  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.331393  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:45.429975  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:10:45.451128  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 06:10:45.468449  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:10:45.484996  304042 provision.go:87] duration metric: took 210.038413ms to configureAuth
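	(The SAN list recorded for the generated server certificate can be verified directly with OpenSSL; a manual check against the file path from the log:

	    openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem \
	        | grep -A1 'Subject Alternative Name'
	)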
	I1210 06:10:45.485019  304042 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:10:45.485211  304042 config.go:182] Loaded profile config "calico-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:45.485317  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.504389  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:45.504704  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:45.504727  304042 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:10:45.803213  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:10:45.803246  304042 machine.go:97] duration metric: took 1.041137813s to provisionDockerMachine
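	(Whether the /etc/sysconfig/crio.minikube file written above is actually consumed depends on how the base image's CRI-O unit is wired; inside the node both the file and the unit can be inspected, illustratively, assuming a shell via `minikube ssh` or `docker exec`:

	    cat /etc/sysconfig/crio.minikube
	    systemctl cat crio | grep -i environment   # shows any EnvironmentFile= drop-ins
	    systemctl is-active crio
	)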
	I1210 06:10:45.803259  304042 client.go:176] duration metric: took 2.381622423s to LocalClient.Create
	I1210 06:10:45.803296  304042 start.go:167] duration metric: took 2.381697835s to libmachine.API.Create "calico-094798"
	I1210 06:10:45.803311  304042 start.go:293] postStartSetup for "calico-094798" (driver="docker")
	I1210 06:10:45.803329  304042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:10:45.803397  304042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:10:45.803449  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.826393  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:45.929332  304042 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:10:45.933747  304042 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:10:45.933787  304042 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:10:45.933799  304042 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:10:45.933854  304042 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:10:45.933944  304042 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:10:45.934059  304042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:10:45.942836  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:10:45.965040  304042 start.go:296] duration metric: took 161.709177ms for postStartSetup
	I1210 06:10:45.965489  304042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-094798
	I1210 06:10:45.984828  304042 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/calico-094798/config.json ...
	I1210 06:10:45.985137  304042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:10:45.985187  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:46.004324  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:46.097960  304042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:10:46.102658  304042 start.go:128] duration metric: took 2.683035841s to createHost
	I1210 06:10:46.102683  304042 start.go:83] releasing machines lock for "calico-094798", held for 2.683191529s
	I1210 06:10:46.102759  304042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-094798
	I1210 06:10:46.121715  304042 ssh_runner.go:195] Run: cat /version.json
	I1210 06:10:46.121782  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:46.121836  304042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:10:46.121912  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:46.147794  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:46.148720  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:46.307934  304042 ssh_runner.go:195] Run: systemctl --version
	I1210 06:10:46.314861  304042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:10:46.346962  304042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:10:46.351671  304042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:10:46.351730  304042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:10:46.379313  304042 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:10:46.379333  304042 start.go:496] detecting cgroup driver to use...
	I1210 06:10:46.379361  304042 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:10:46.379404  304042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:10:46.400331  304042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:10:46.413428  304042 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:10:46.413478  304042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:10:46.430559  304042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:10:46.446845  304042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:10:46.537721  304042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:10:46.672740  304042 docker.go:234] disabling docker service ...
	I1210 06:10:46.672802  304042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:10:46.710487  304042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:10:46.737474  304042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:10:46.865650  304042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:10:46.984780  304042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:10:47.002560  304042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:10:47.022612  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:47.200663  304042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:10:47.200725  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.344695  304042 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:10:47.344767  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.356349  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.366907  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.377751  304042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:10:47.389305  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.401811  304042 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.423304  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.435210  304042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:10:47.445289  304042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:10:47.455556  304042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:47.563129  304042 ssh_runner.go:195] Run: sudo systemctl restart crio
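	(After the sed edits and the restart, the effective CRI-O settings can be spot-checked straight from the drop-in the commands above modify; a manual verification sketch:

	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)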
	I1210 06:10:47.730797  304042 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:10:47.730864  304042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:10:47.735209  304042 start.go:564] Will wait 60s for crictl version
	I1210 06:10:47.735257  304042 ssh_runner.go:195] Run: which crictl
	I1210 06:10:47.739329  304042 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:10:47.766455  304042 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:10:47.766538  304042 ssh_runner.go:195] Run: crio --version
	I1210 06:10:47.797025  304042 ssh_runner.go:195] Run: crio --version
	I1210 06:10:47.835590  304042 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:10:47.837187  304042 cli_runner.go:164] Run: docker network inspect calico-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:47.855950  304042 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:10:47.859950  304042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:10:47.873420  304042 kubeadm.go:884] updating cluster {Name:calico-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-094798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:10:47.873636  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:48.017869  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	
	
	==> CRI-O <==
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.919902072Z" level=info msg="RDT not available in the host system"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.919917491Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.92079743Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.920819729Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.920836201Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.922364334Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.922756344Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.928276624Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.928296986Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.929036174Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.929544392Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.929602758Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.017412053Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-t6x5x Namespace:kube-system ID:4b569e2c432bd7fcdf352a86eea5724968b9eca534f130dd5a643dc5b6f23e37 UID:b893f947-02d7-41b1-9886-9b0830ddf69c NetNS:/var/run/netns/8c041cec-d674-4da3-8914-4b8df975afe8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005162f8}] Aliases:map[]}"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.017802837Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-t6x5x for CNI network kindnet (type=ptp)"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018417287Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018440189Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018499423Z" level=info msg="Create NRI interface"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018609414Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018622342Z" level=info msg="runtime interface created"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018636271Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018644485Z" level=info msg="runtime interface starting up..."
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018652339Z" level=info msg="starting plugins..."
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018667611Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.019001577Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:10:42 pause-257171 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	66ade9741737d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     14 seconds ago      Running             coredns                   0                   4b569e2c432bd       coredns-66bc5c9577-t6x5x               kube-system
	f35e842647561       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11   25 seconds ago      Running             kindnet-cni               0                   694332fbe1d24       kindnet-8nqff                          kube-system
	a8135c67f5495       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     27 seconds ago      Running             kube-proxy                0                   3334f8496ef79       kube-proxy-hd5t7                       kube-system
	107fccc521147       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     38 seconds ago      Running             kube-controller-manager   0                   4998f7e82ae80       kube-controller-manager-pause-257171   kube-system
	0f33d39c61905       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     38 seconds ago      Running             kube-apiserver            0                   8dd1f2c35c493       kube-apiserver-pause-257171            kube-system
	deaa3b6fcb814       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     38 seconds ago      Running             etcd                      0                   e5a3641b6f408       etcd-pause-257171                      kube-system
	8dddd76371ac0       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     38 seconds ago      Running             kube-scheduler            0                   7e27883dc9703       kube-scheduler-pause-257171            kube-system
	
	
	==> coredns [66ade9741737d0618d31b9e331c6d038cdc1e2bb1fd529e9541980bf68e0abec] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33250 - 34534 "HINFO IN 6074135922786494092.6906691045981847379. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.088699836s
	
	
	==> describe nodes <==
	Name:               pause-257171
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-257171
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=pause-257171
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_10_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:10:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-257171
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:10:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-257171
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                ed004dca-6a51-4a18-8dcd-3ac4d151217f
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-t6x5x                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-257171                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-8nqff                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-257171             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-257171    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-hd5t7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-257171             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node pause-257171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node pause-257171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node pause-257171 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-257171 event: Registered Node pause-257171 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-257171 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.085783] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023769] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.147072] kauditd_printk_skb: 47 callbacks suppressed
	[Dec10 05:30] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.051409] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[Dec10 05:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +2.047781] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +4.031549] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +8.447180] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[ +16.382295] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[Dec10 05:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	
	
	==> etcd [deaa3b6fcb814994f944b5b7e7ec3daa03eee3377299d9707ee55f5419d0fefe] <==
	{"level":"warn","ts":"2025-12-10T06:10:11.987628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:11.996345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.003548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.010045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.017382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.023654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.030812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.037174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.044979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.060200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.067025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.074056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.081500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.088774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.096346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.103382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.109854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.116397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.122874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.129938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.143293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.150914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48896","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:48896: read: connection reset by peer"}
	{"level":"warn","ts":"2025-12-10T06:10:12.159375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.167866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.208657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:10:48 up 53 min,  0 user,  load average: 5.41, 3.05, 1.99
	Linux pause-257171 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f35e84264756175001d4f7ecb61402e5f125b72604a043a8c128a061d528b9fd] <==
	I1210 06:10:23.593881       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:10:23.594234       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:10:23.594389       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:10:23.594404       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:10:23.594430       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:10:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:10:23.794303       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:10:23.794334       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:10:23.794349       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:10:23.794613       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:10:24.094841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:10:24.094873       1 metrics.go:72] Registering metrics
	I1210 06:10:24.094946       1 controller.go:711] "Syncing nftables rules"
	I1210 06:10:33.798498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:10:33.798560       1 main.go:301] handling current node
	I1210 06:10:43.802193       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:10:43.802229       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0f33d39c6190527f6d0e9ac7647ac9706f3280f63c52c18b858fc59065309e3e] <==
	I1210 06:10:12.715602       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:10:12.717134       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:10:12.717189       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:10:12.717210       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:10:12.717218       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:10:12.717225       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:10:12.741564       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:10:12.749466       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:10:13.608121       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:10:13.613888       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:10:13.613910       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:10:14.129947       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:10:14.177552       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:10:14.308430       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:10:14.313739       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 06:10:14.314625       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:10:14.318141       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:10:14.623652       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:10:15.373048       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:10:15.383236       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:10:15.390304       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:10:19.828969       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:10:19.833012       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:10:20.327629       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:10:20.725297       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [107fccc52114724b5c7829573ed47387d6cbba2579d253522d27aec12a9ce2af] <==
	I1210 06:10:19.624572       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:10:19.624590       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:10:19.624634       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:10:19.624660       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:10:19.624675       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:10:19.624691       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:10:19.624705       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:10:19.624861       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:10:19.626038       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:10:19.626059       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:10:19.626063       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:10:19.629519       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 06:10:19.629522       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:10:19.629541       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:10:19.629548       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:10:19.629585       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 06:10:19.629610       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 06:10:19.629617       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 06:10:19.629621       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 06:10:19.629735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:10:19.630984       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:10:19.631051       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:10:19.643805       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-257171" podCIDRs=["10.244.0.0/24"]
	I1210 06:10:19.648096       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:10:34.576386       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a8135c67f54958dd4233b61e0d015aa8778759365567de664bda3e2ba8db00ab] <==
	I1210 06:10:21.136612       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:10:21.234060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:10:21.334562       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:10:21.334598       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:10:21.334692       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:10:21.353307       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:10:21.353365       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:10:21.358546       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:10:21.358882       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:10:21.358907       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:10:21.360969       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:10:21.361001       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:10:21.361126       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:10:21.361149       1 config.go:200] "Starting service config controller"
	I1210 06:10:21.361163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:10:21.361170       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:10:21.361206       1 config.go:309] "Starting node config controller"
	I1210 06:10:21.361223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:10:21.361231       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:10:21.462235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:10:21.462266       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:10:21.462290       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8dddd76371ac09675b74acb4cc1233f21c564c3ec61620eb5510f2aa62d0fd76] <==
	E1210 06:10:12.676419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:12.676540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:10:12.676633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:12.676647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:10:12.676667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:10:12.676729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:10:12.676741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:10:12.676826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:10:12.676842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:10:12.676868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:10:12.677010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:10:12.677228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:10:13.549250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:10:13.571591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:10:13.612706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:10:13.617779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:10:13.658906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:13.679678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:10:13.714883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:10:13.737959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:10:13.747210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:13.816841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:10:13.874781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 06:10:13.907908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1210 06:10:16.971722       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.849977    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afb8ed20-85e5-48ca-9b80-aba3e0f6e330-lib-modules\") pod \"kindnet-8nqff\" (UID: \"afb8ed20-85e5-48ca-9b80-aba3e0f6e330\") " pod="kube-system/kindnet-8nqff"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850014    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6m27\" (UniqueName: \"kubernetes.io/projected/5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942-kube-api-access-w6m27\") pod \"kube-proxy-hd5t7\" (UID: \"5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942\") " pod="kube-system/kube-proxy-hd5t7"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850047    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8m6c\" (UniqueName: \"kubernetes.io/projected/afb8ed20-85e5-48ca-9b80-aba3e0f6e330-kube-api-access-s8m6c\") pod \"kindnet-8nqff\" (UID: \"afb8ed20-85e5-48ca-9b80-aba3e0f6e330\") " pod="kube-system/kindnet-8nqff"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850072    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942-kube-proxy\") pod \"kube-proxy-hd5t7\" (UID: \"5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942\") " pod="kube-system/kube-proxy-hd5t7"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850122    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942-lib-modules\") pod \"kube-proxy-hd5t7\" (UID: \"5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942\") " pod="kube-system/kube-proxy-hd5t7"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850147    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/afb8ed20-85e5-48ca-9b80-aba3e0f6e330-cni-cfg\") pod \"kindnet-8nqff\" (UID: \"afb8ed20-85e5-48ca-9b80-aba3e0f6e330\") " pod="kube-system/kindnet-8nqff"
	Dec 10 06:10:22 pause-257171 kubelet[2360]: I1210 06:10:22.120397    2360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hd5t7" podStartSLOduration=2.1203528 podStartE2EDuration="2.1203528s" podCreationTimestamp="2025-12-10 06:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:10:21.272501169 +0000 UTC m=+6.151689146" watchObservedRunningTime="2025-12-10 06:10:22.1203528 +0000 UTC m=+6.999540777"
	Dec 10 06:10:24 pause-257171 kubelet[2360]: I1210 06:10:24.278435    2360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8nqff" podStartSLOduration=2.076586807 podStartE2EDuration="4.278418821s" podCreationTimestamp="2025-12-10 06:10:20 +0000 UTC" firstStartedPulling="2025-12-10 06:10:21.057017372 +0000 UTC m=+5.936205343" lastFinishedPulling="2025-12-10 06:10:23.258849401 +0000 UTC m=+8.138037357" observedRunningTime="2025-12-10 06:10:24.278216079 +0000 UTC m=+9.157404055" watchObservedRunningTime="2025-12-10 06:10:24.278418821 +0000 UTC m=+9.157606797"
	Dec 10 06:10:34 pause-257171 kubelet[2360]: I1210 06:10:34.365662    2360 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:10:34 pause-257171 kubelet[2360]: I1210 06:10:34.445848    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b893f947-02d7-41b1-9886-9b0830ddf69c-config-volume\") pod \"coredns-66bc5c9577-t6x5x\" (UID: \"b893f947-02d7-41b1-9886-9b0830ddf69c\") " pod="kube-system/coredns-66bc5c9577-t6x5x"
	Dec 10 06:10:34 pause-257171 kubelet[2360]: I1210 06:10:34.446046    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt256\" (UniqueName: \"kubernetes.io/projected/b893f947-02d7-41b1-9886-9b0830ddf69c-kube-api-access-lt256\") pod \"coredns-66bc5c9577-t6x5x\" (UID: \"b893f947-02d7-41b1-9886-9b0830ddf69c\") " pod="kube-system/coredns-66bc5c9577-t6x5x"
	Dec 10 06:10:35 pause-257171 kubelet[2360]: I1210 06:10:35.317406    2360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t6x5x" podStartSLOduration=15.317387625 podStartE2EDuration="15.317387625s" podCreationTimestamp="2025-12-10 06:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:10:35.310261489 +0000 UTC m=+20.189449466" watchObservedRunningTime="2025-12-10 06:10:35.317387625 +0000 UTC m=+20.196575602"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: W1210 06:10:39.233314    2360 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233421    2360 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233522    2360 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233545    2360 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233562    2360 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.301093    2360 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.301151    2360 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.301167    2360 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: W1210 06:10:39.334369    2360 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:10:46 pause-257171 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:10:46 pause-257171 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:10:46 pause-257171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:46 pause-257171 systemd[1]: kubelet.service: Consumed 1.291s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257171 -n pause-257171
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257171 -n pause-257171: exit status 2 (447.723729ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-257171 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
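A minimal sketch for re-running by hand the runtime probe recorded in the start log above (stat of the CRI-O socket, then crictl version), assuming the pause-257171 profile from this run still exists and that crictl is at /usr/local/bin/crictl on the node as the log shows:

	# probe the CRI-O socket and runtime version inside the node (manual sketch, not part of the test run)
	out/minikube-linux-amd64 ssh -p pause-257171 -- sudo stat /var/run/crio/crio.sock
	out/minikube-linux-amd64 ssh -p pause-257171 -- sudo /usr/local/bin/crictl version
	# re-check the apiserver state the same way helpers_test.go does
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257171

If the socket is missing, as the kubelet errors at 06:10:39 suggest it briefly was, crictl should fail with a connection error similar to the dial errors shown in the kubelet section above; if it reports RuntimeName cri-o, the runtime side is healthy and the pause failure is more likely in the kubelet/systemd handling.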
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-257171
helpers_test.go:244: (dbg) docker inspect pause-257171:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb",
	        "Created": "2025-12-10T06:09:52.796235718Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288929,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:09:52.824493444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/hosts",
	        "LogPath": "/var/lib/docker/containers/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb/93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb-json.log",
	        "Name": "/pause-257171",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-257171:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-257171",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93be872b4015c629f55bf45ebefb4592f711820778368fef4cbafa09515cd1eb",
	                "LowerDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2a683995b3b2ca11bd33ed8e07ab9d4752c713f7ef23d0f3e73756731530cdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-257171",
	                "Source": "/var/lib/docker/volumes/pause-257171/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-257171",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-257171",
	                "name.minikube.sigs.k8s.io": "pause-257171",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3a31e798092a8ca37a84b10bde16f09c3bb95260da866b78352d49d559a705e6",
	            "SandboxKey": "/var/run/docker/netns/3a31e798092a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-257171": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0397b6bc6aea8076e07511fb3421bf43e750217f467f117f9cb843e5fc24d81f",
	                    "EndpointID": "e46b9ae2c1fc1623ceab885d7e3d571bec114ebc25b1bc72379aa386ff90c087",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ce:ce:0e:46:75:19",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-257171",
	                        "93be872b4015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
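When only a few fields of the inspect dump above matter, docker inspect accepts a Go template; a sketch of narrowing it (hypothetical invocations, not part of the recorded run):

	# Container state: status and whether the runtime reports it as paused
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-257171
	# IP address on the pause-257171 network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-257171
	# Host port published for the API server port 8443
	docker port pause-257171 8443/tcp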
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-257171 -n pause-257171
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-257171 -n pause-257171: exit status 2 (439.922257ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
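Beyond the 25-line tail collected below, the same post-mortem logs can be lengthened or written to a file; a sketch (the file path is an arbitrary choice, not part of the recorded run):

	out/minikube-linux-amd64 -p pause-257171 logs -n 200
	out/minikube-linux-amd64 -p pause-257171 logs --file=/tmp/pause-257171.log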
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-257171 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-257171 logs -n 25: (1.261094451s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-094798 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ ssh     │ -p cilium-094798 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ ssh     │ -p cilium-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ ssh     │ -p cilium-094798 sudo crio config                                                                                                                                                                                         │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ delete  │ -p cilium-094798                                                                                                                                                                                                          │ cilium-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ start   │ -p force-systemd-env-872487 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-872487  │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:07 UTC │
	│ delete  │ -p force-systemd-env-872487                                                                                                                                                                                               │ force-systemd-env-872487  │ jenkins │ v1.37.0 │ 10 Dec 25 06:07 UTC │ 10 Dec 25 06:07 UTC │
	│ start   │ -p cert-expiration-790790 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-790790    │ jenkins │ v1.37.0 │ 10 Dec 25 06:07 UTC │ 10 Dec 25 06:07 UTC │
	│ start   │ -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-196025 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ start   │ -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                             │ kubernetes-upgrade-196025 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p kubernetes-upgrade-196025                                                                                                                                                                                              │ kubernetes-upgrade-196025 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -p cert-options-357277 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ ssh     │ cert-options-357277 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ ssh     │ -p cert-options-357277 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p cert-options-357277                                                                                                                                                                                                    │ cert-options-357277       │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -p pause-257171 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-257171              │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:10 UTC │
	│ delete  │ -p stopped-upgrade-616121                                                                                                                                                                                                 │ stopped-upgrade-616121    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p auto-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-094798               │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	│ start   │ -p cert-expiration-790790 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                 │ cert-expiration-790790    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p pause-257171 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-257171              │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ delete  │ -p cert-expiration-790790                                                                                                                                                                                                 │ cert-expiration-790790    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p kindnet-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                  │ kindnet-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	│ delete  │ -p running-upgrade-897548                                                                                                                                                                                                 │ running-upgrade-897548    │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │ 10 Dec 25 06:10 UTC │
	│ start   │ -p calico-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                    │ calico-094798             │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	│ pause   │ -p pause-257171 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-257171              │ jenkins │ v1.37.0 │ 10 Dec 25 06:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:10:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:10:43.100639  304042 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:10:43.101008  304042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:10:43.101018  304042 out.go:374] Setting ErrFile to fd 2...
	I1210 06:10:43.101024  304042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:10:43.101312  304042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:10:43.101864  304042 out.go:368] Setting JSON to false
	I1210 06:10:43.103445  304042 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3187,"bootTime":1765343856,"procs":425,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:10:43.103525  304042 start.go:143] virtualization: kvm guest
	I1210 06:10:43.105741  304042 out.go:179] * [calico-094798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:10:43.106996  304042 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:10:43.107006  304042 notify.go:221] Checking for updates...
	I1210 06:10:43.109352  304042 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:10:43.111837  304042 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:10:43.116375  304042 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:10:43.117680  304042 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:10:43.118770  304042 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:10:43.120543  304042 config.go:182] Loaded profile config "auto-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.120712  304042 config.go:182] Loaded profile config "kindnet-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.120905  304042 config.go:182] Loaded profile config "pause-257171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.121049  304042 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:10:43.149931  304042 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:10:43.150065  304042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:10:43.224585  304042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-10 06:10:43.213492096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:10:43.224693  304042 docker.go:319] overlay module found
	I1210 06:10:43.230445  304042 out.go:179] * Using the docker driver based on user configuration
	I1210 06:10:43.232279  304042 start.go:309] selected driver: docker
	I1210 06:10:43.232290  304042 start.go:927] validating driver "docker" against <nil>
	I1210 06:10:43.232304  304042 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:10:43.232941  304042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:10:43.298031  304042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-10 06:10:43.28651193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:10:43.298261  304042 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:10:43.298552  304042 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:10:43.300190  304042 out.go:179] * Using Docker driver with root privileges
	I1210 06:10:43.301339  304042 cni.go:84] Creating CNI manager for "calico"
	I1210 06:10:43.301361  304042 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1210 06:10:43.301444  304042 start.go:353] cluster config:
	{Name:calico-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-094798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:43.302698  304042 out.go:179] * Starting "calico-094798" primary control-plane node in "calico-094798" cluster
	I1210 06:10:43.304333  304042 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:10:43.306523  304042 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:10:43.307502  304042 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:10:43.307604  304042 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:10:43.333694  304042 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:10:43.333722  304042 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:10:43.335295  304042 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 06:10:43.418981  304042 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:10:43.419139  304042 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/calico-094798/config.json ...
	I1210 06:10:43.419190  304042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/calico-094798/config.json: {Name:mk2284af8bfa69aea07c58a07e318e3ef2d6a29f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.419261  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:43.419361  304042 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:10:43.419410  304042 start.go:360] acquireMachinesLock for calico-094798: {Name:mk11609186fd3775863e23d3c3f6cd14ef0616fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.419479  304042 start.go:364] duration metric: took 46.952µs to acquireMachinesLock for "calico-094798"
	I1210 06:10:43.419501  304042 start.go:93] Provisioning new machine with config: &{Name:calico-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-094798 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:10:43.419609  304042 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:10:42.146840  300941 cli_runner.go:164] Run: docker network inspect pause-257171 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:42.167292  300941 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:10:42.171675  300941 kubeadm.go:884] updating cluster {Name:pause-257171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-257171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:10:42.172063  300941 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:42.387491  300941 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:42.529985  300941 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:42.682189  300941 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:10:42.682260  300941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:10:42.716579  300941 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:10:42.716600  300941 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:10:42.716608  300941 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1210 06:10:42.716732  300941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-257171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-257171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:10:42.716814  300941 ssh_runner.go:195] Run: crio config
	I1210 06:10:42.776935  300941 cni.go:84] Creating CNI manager for ""
	I1210 06:10:42.776962  300941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:10:42.776982  300941 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:10:42.777012  300941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-257171 NodeName:pause-257171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:10:42.777205  300941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-257171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:10:42.777278  300941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:10:42.786926  300941 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:10:42.787005  300941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:10:42.800455  300941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1210 06:10:42.825837  300941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:10:42.848825  300941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1210 06:10:42.880752  300941 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:10:42.886976  300941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:43.033982  300941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:43.048691  300941 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171 for IP: 192.168.76.2
	I1210 06:10:43.048713  300941 certs.go:195] generating shared ca certs ...
	I1210 06:10:43.048731  300941 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.048889  300941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:10:43.048950  300941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:10:43.048963  300941 certs.go:257] generating profile certs ...
	I1210 06:10:43.049093  300941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.key
	I1210 06:10:43.049175  300941 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/apiserver.key.49fe122d
	I1210 06:10:43.049238  300941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/proxy-client.key
	I1210 06:10:43.049379  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:10:43.049422  300941 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:10:43.049436  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:10:43.049476  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:10:43.049511  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:10:43.049551  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:10:43.049613  300941 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:10:43.050464  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:10:43.068614  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:10:43.092546  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:10:43.116372  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:10:43.139733  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 06:10:43.160779  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:10:43.187280  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:10:43.211991  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:10:43.232266  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:10:43.250936  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:10:43.272678  300941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:10:43.295129  300941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:10:43.309943  300941 ssh_runner.go:195] Run: openssl version
	I1210 06:10:43.316756  300941 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.324333  300941 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:10:43.332538  300941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.336334  300941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.336377  300941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:10:43.373442  300941 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:10:43.381417  300941 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.389224  300941 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:10:43.397735  300941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.402046  300941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.402111  300941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:43.442306  300941 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:10:43.452375  300941 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.462434  300941 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:10:43.471617  300941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.476749  300941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.476807  300941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:10:43.526857  300941 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:10:43.534947  300941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:10:43.538927  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:10:43.581137  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:10:43.632644  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:10:43.673982  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:10:43.711699  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:10:43.754802  300941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:10:43.804206  300941 kubeadm.go:401] StartCluster: {Name:pause-257171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-257171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:43.804343  300941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:10:43.804396  300941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:10:43.834614  300941 cri.go:89] found id: "66ade9741737d0618d31b9e331c6d038cdc1e2bb1fd529e9541980bf68e0abec"
	I1210 06:10:43.834637  300941 cri.go:89] found id: "f35e84264756175001d4f7ecb61402e5f125b72604a043a8c128a061d528b9fd"
	I1210 06:10:43.834643  300941 cri.go:89] found id: "a8135c67f54958dd4233b61e0d015aa8778759365567de664bda3e2ba8db00ab"
	I1210 06:10:43.834647  300941 cri.go:89] found id: "107fccc52114724b5c7829573ed47387d6cbba2579d253522d27aec12a9ce2af"
	I1210 06:10:43.834651  300941 cri.go:89] found id: "0f33d39c6190527f6d0e9ac7647ac9706f3280f63c52c18b858fc59065309e3e"
	I1210 06:10:43.834662  300941 cri.go:89] found id: "deaa3b6fcb814994f944b5b7e7ec3daa03eee3377299d9707ee55f5419d0fefe"
	I1210 06:10:43.834666  300941 cri.go:89] found id: "8dddd76371ac09675b74acb4cc1233f21c564c3ec61620eb5510f2aa62d0fd76"
	I1210 06:10:43.834673  300941 cri.go:89] found id: ""
	I1210 06:10:43.834715  300941 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:10:43.847718  300941 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:10:43Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:10:43.847793  300941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:10:43.857915  300941 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:10:43.857934  300941 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:10:43.857979  300941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:10:43.870726  300941 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:10:43.871357  300941 kubeconfig.go:125] found "pause-257171" server: "https://192.168.76.2:8443"
	I1210 06:10:43.872154  300941 kapi.go:59] client config for pause-257171: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.key", CAFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:10:43.872726  300941 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:10:43.872745  300941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:10:43.872752  300941 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:10:43.872758  300941 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:10:43.872763  300941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:10:43.873205  300941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:10:43.881790  300941 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 06:10:43.881824  300941 kubeadm.go:602] duration metric: took 23.883302ms to restartPrimaryControlPlane
	I1210 06:10:43.881835  300941 kubeadm.go:403] duration metric: took 77.63898ms to StartCluster
	I1210 06:10:43.881853  300941 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.881925  300941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:10:43.882513  300941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:43.882745  300941 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:10:43.882870  300941 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:10:43.882989  300941 config.go:182] Loaded profile config "pause-257171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.884376  300941 out.go:179] * Verifying Kubernetes components...
	I1210 06:10:43.884374  300941 out.go:179] * Enabled addons: 
	I1210 06:10:40.872976  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
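The "Not caching binary" lines here and below mean kubeadm is streamed straight from dl.k8s.io and checked against the published .sha256 rather than served from the local cache. A rough manual equivalent of that download-and-verify step (a sketch, not minikube's actual code path) looks like this:

    curl -LO "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm"
    curl -LO "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256"
    # verify the binary against the published checksum
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check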
	I1210 06:10:40.874514  302200 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:10:40.874766  302200 start.go:159] libmachine.API.Create for "kindnet-094798" (driver="docker")
	I1210 06:10:40.874803  302200 client.go:173] LocalClient.Create starting
	I1210 06:10:40.874905  302200 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:10:40.874946  302200 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:40.874971  302200 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:40.875029  302200 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:10:40.875055  302200 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:40.875069  302200 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:40.875529  302200 cli_runner.go:164] Run: docker network inspect kindnet-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:10:40.899685  302200 cli_runner.go:211] docker network inspect kindnet-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:10:40.899834  302200 network_create.go:284] running [docker network inspect kindnet-094798] to gather additional debugging logs...
	I1210 06:10:40.899863  302200 cli_runner.go:164] Run: docker network inspect kindnet-094798
	W1210 06:10:40.928166  302200 cli_runner.go:211] docker network inspect kindnet-094798 returned with exit code 1
	I1210 06:10:40.928199  302200 network_create.go:287] error running [docker network inspect kindnet-094798]: docker network inspect kindnet-094798: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-094798 not found
	I1210 06:10:40.928216  302200 network_create.go:289] output of [docker network inspect kindnet-094798]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-094798 not found
	
	** /stderr **
	I1210 06:10:40.928365  302200 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:40.956957  302200 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:10:40.959073  302200 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:10:40.959841  302200 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:10:40.960683  302200 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0397b6bc6aea IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:c5:49:61:c0:1c} reservation:<nil>}
	I1210 06:10:40.961771  302200 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec5750}
	I1210 06:10:40.961819  302200 network_create.go:124] attempt to create docker network kindnet-094798 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:10:40.961886  302200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-094798 kindnet-094798
	I1210 06:10:41.019644  302200 network_create.go:108] docker network kindnet-094798 192.168.85.0/24 created
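The scan above skips the subnets already claimed by other profiles (192.168.49/58/67/76.0/24) and settles on 192.168.85.0/24; the create command it then issues is the long Run line above. Reproducing or inspecting such a network by hand uses the same flags, nothing added:

    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=kindnet-094798 \
      kindnet-094798
    # confirm the subnet/gateway that were assigned
    docker network inspect kindnet-094798 --format '{{json .IPAM.Config}}'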
	I1210 06:10:41.019675  302200 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-094798" container
	I1210 06:10:41.019743  302200 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:10:41.038354  302200 cli_runner.go:164] Run: docker volume create kindnet-094798 --label name.minikube.sigs.k8s.io=kindnet-094798 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:10:41.056638  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:41.057440  302200 oci.go:103] Successfully created a docker volume kindnet-094798
	I1210 06:10:41.057514  302200 cli_runner.go:164] Run: docker run --rm --name kindnet-094798-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-094798 --entrypoint /usr/bin/test -v kindnet-094798:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:10:41.206379  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:41.334454  302200 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334471  302200 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334505  302200 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334510  302200 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334471  302200 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334476  302200 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334508  302200 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334583  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:10:41.334601  302200 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 166.467µs
	I1210 06:10:41.334612  302200 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:10:41.334623  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:10:41.334637  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:10:41.334649  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:10:41.334649  302200 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 179.947µs
	I1210 06:10:41.334657  302200 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:10:41.334658  302200 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 154.857µs
	I1210 06:10:41.334668  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:10:41.334674  302200 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 177.71µs
	I1210 06:10:41.334683  302200 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:10:41.334669  302200 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:10:41.334687  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:10:41.334634  302200 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 128.85µs
	I1210 06:10:41.334716  302200 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 256.883µs
	I1210 06:10:41.334726  302200 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:10:41.334696  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:10:41.334714  302200 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:41.334752  302200 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 291.13µs
	I1210 06:10:41.334749  302200 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:10:41.334779  302200 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:10:41.334842  302200 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:10:41.334860  302200 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 191.547µs
	I1210 06:10:41.334871  302200 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:10:41.334887  302200 cache.go:87] Successfully saved all images to host disk.
	I1210 06:10:42.015731  302200 oci.go:107] Successfully prepared a docker volume kindnet-094798
	I1210 06:10:42.015790  302200 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:10:42.015903  302200 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:10:42.015938  302200 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:10:42.016008  302200 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:10:42.082131  302200 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-094798 --name kindnet-094798 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-094798 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-094798 --network kindnet-094798 --ip 192.168.85.2 --volume kindnet-094798:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
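The single-line docker run above creates the actual node container. Reflowed here purely for readability (same flags, only regrouped), it shows the pieces set up earlier in the log: the dedicated network and static IP, the preloaded /var volume, and the localhost port publishes for the API server and SSH:

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --hostname kindnet-094798 --name kindnet-094798 \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=kindnet-094798 \
      --label role.minikube.sigs.k8s.io= \
      --label mode.minikube.sigs.k8s.io=kindnet-094798 \
      --network kindnet-094798 --ip 192.168.85.2 \
      --volume kindnet-094798:/var \
      --memory=3072mb -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
      --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f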
	I1210 06:10:42.461394  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Running}}
	I1210 06:10:42.478783  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Status}}
	I1210 06:10:42.497133  302200 cli_runner.go:164] Run: docker exec kindnet-094798 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:10:42.549911  302200 oci.go:144] the created container "kindnet-094798" has a running status.
	I1210 06:10:42.549944  302200 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa...
	I1210 06:10:42.613910  302200 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:10:42.791148  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Status}}
	I1210 06:10:42.818281  302200 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:10:42.818302  302200 kic_runner.go:114] Args: [docker exec --privileged kindnet-094798 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:10:42.885896  302200 cli_runner.go:164] Run: docker container inspect kindnet-094798 --format={{.State.Status}}
	I1210 06:10:42.913073  302200 machine.go:94] provisionDockerMachine start ...
	I1210 06:10:42.913178  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:42.939674  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:42.940038  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:42.940058  302200 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:10:43.093738  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-094798
	
	I1210 06:10:43.093777  302200 ubuntu.go:182] provisioning hostname "kindnet-094798"
	I1210 06:10:43.093846  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.119149  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:43.119474  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:43.119497  302200 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-094798 && echo "kindnet-094798" | sudo tee /etc/hostname
	I1210 06:10:43.289995  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-094798
	
	I1210 06:10:43.290163  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.310205  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:43.310492  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:43.310518  302200 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-094798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-094798/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-094798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:10:43.449253  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:10:43.449342  302200 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:10:43.449389  302200 ubuntu.go:190] setting up certificates
	I1210 06:10:43.449420  302200 provision.go:84] configureAuth start
	I1210 06:10:43.449475  302200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-094798
	I1210 06:10:43.471977  302200 provision.go:143] copyHostCerts
	I1210 06:10:43.472033  302200 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:10:43.472046  302200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:10:43.472139  302200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:10:43.472252  302200 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:10:43.472273  302200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:10:43.472318  302200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:10:43.472390  302200 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:10:43.472405  302200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:10:43.472438  302200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:10:43.472516  302200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.kindnet-094798 san=[127.0.0.1 192.168.85.2 kindnet-094798 localhost minikube]
	I1210 06:10:43.637009  302200 provision.go:177] copyRemoteCerts
	I1210 06:10:43.637061  302200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:10:43.637115  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.658553  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:43.757886  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:10:43.780472  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1210 06:10:43.801966  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:10:43.823504  302200 provision.go:87] duration metric: took 374.064124ms to configureAuth
	I1210 06:10:43.823531  302200 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:10:43.823711  302200 config.go:182] Loaded profile config "kindnet-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:43.823835  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:43.844959  302200 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:43.845265  302200 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1210 06:10:43.845292  302200 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:10:44.163201  302200 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:10:44.163224  302200 machine.go:97] duration metric: took 1.250112713s to provisionDockerMachine
	I1210 06:10:44.163234  302200 client.go:176] duration metric: took 3.28842395s to LocalClient.Create
	I1210 06:10:44.163245  302200 start.go:167] duration metric: took 3.288482122s to libmachine.API.Create "kindnet-094798"
	I1210 06:10:44.163252  302200 start.go:293] postStartSetup for "kindnet-094798" (driver="docker")
	I1210 06:10:44.163260  302200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:10:44.163304  302200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:10:44.163336  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.183120  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.289321  302200 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:10:44.293234  302200 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:10:44.293266  302200 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:10:44.293276  302200 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:10:44.293333  302200 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:10:44.293476  302200 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:10:44.293621  302200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:10:44.301797  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:10:44.326196  302200 start.go:296] duration metric: took 162.929971ms for postStartSetup
	I1210 06:10:44.326593  302200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-094798
	I1210 06:10:44.345760  302200 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/config.json ...
	I1210 06:10:44.346051  302200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:10:44.346160  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.364405  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.463185  302200 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:10:44.469341  302200 start.go:128] duration metric: took 3.59705785s to createHost
	I1210 06:10:44.469365  302200 start.go:83] releasing machines lock for "kindnet-094798", held for 3.597192515s
	I1210 06:10:44.469435  302200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-094798
	I1210 06:10:44.491115  302200 ssh_runner.go:195] Run: cat /version.json
	I1210 06:10:44.491132  302200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:10:44.491167  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.491208  302200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-094798
	I1210 06:10:44.511371  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.513385  302200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/kindnet-094798/id_rsa Username:docker}
	I1210 06:10:44.689031  302200 ssh_runner.go:195] Run: systemctl --version
	I1210 06:10:44.698132  302200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:10:44.746671  302200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:10:44.752056  302200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:10:44.752146  302200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:10:44.785107  302200 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:10:44.785135  302200 start.go:496] detecting cgroup driver to use...
	I1210 06:10:44.785166  302200 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:10:44.785213  302200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:10:44.803792  302200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:10:44.816894  302200 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:10:44.816934  302200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:10:44.834670  302200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:10:44.852479  302200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:10:44.960233  302200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:10:45.064798  302200 docker.go:234] disabling docker service ...
	I1210 06:10:45.064859  302200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:10:45.086883  302200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:10:45.100801  302200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:10:45.191049  302200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:10:45.280393  302200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:10:45.293555  302200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:10:45.308439  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:45.447452  302200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:10:45.447507  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.457736  302200 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:10:45.457787  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.466192  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.474474  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.482468  302200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:10:45.490122  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.498552  302200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:43.886148  300941 addons.go:530] duration metric: took 3.299436ms for enable addons: enabled=[]
	I1210 06:10:43.886186  300941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:44.025176  300941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:44.042730  300941 node_ready.go:35] waiting up to 6m0s for node "pause-257171" to be "Ready" ...
	I1210 06:10:44.051926  300941 node_ready.go:49] node "pause-257171" is "Ready"
	I1210 06:10:44.051947  300941 node_ready.go:38] duration metric: took 9.176448ms for node "pause-257171" to be "Ready" ...
	I1210 06:10:44.051959  300941 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:10:44.051996  300941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:10:44.064662  300941 api_server.go:72] duration metric: took 181.871597ms to wait for apiserver process to appear ...
	I1210 06:10:44.064688  300941 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:10:44.064706  300941 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:10:44.069621  300941 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:10:44.070741  300941 api_server.go:141] control plane version: v1.34.3
	I1210 06:10:44.070781  300941 api_server.go:131] duration metric: took 6.085464ms to wait for apiserver health ...
	I1210 06:10:44.070792  300941 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:10:44.074482  300941 system_pods.go:59] 7 kube-system pods found
	I1210 06:10:44.074522  300941 system_pods.go:61] "coredns-66bc5c9577-t6x5x" [b893f947-02d7-41b1-9886-9b0830ddf69c] Running
	I1210 06:10:44.074536  300941 system_pods.go:61] "etcd-pause-257171" [9e550ac3-4987-44a9-9425-c7758c2d698e] Running
	I1210 06:10:44.074544  300941 system_pods.go:61] "kindnet-8nqff" [afb8ed20-85e5-48ca-9b80-aba3e0f6e330] Running
	I1210 06:10:44.074549  300941 system_pods.go:61] "kube-apiserver-pause-257171" [9fade971-ee7e-4542-8911-93a6aa0fed0c] Running
	I1210 06:10:44.074558  300941 system_pods.go:61] "kube-controller-manager-pause-257171" [caac76e5-4ae6-4766-aa2c-31badc24a748] Running
	I1210 06:10:44.074569  300941 system_pods.go:61] "kube-proxy-hd5t7" [5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942] Running
	I1210 06:10:44.074577  300941 system_pods.go:61] "kube-scheduler-pause-257171" [ccca1914-c8e1-4da2-b9fc-60e3b1097de8] Running
	I1210 06:10:44.074585  300941 system_pods.go:74] duration metric: took 3.785527ms to wait for pod list to return data ...
	I1210 06:10:44.074595  300941 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:10:44.077115  300941 default_sa.go:45] found service account: "default"
	I1210 06:10:44.077135  300941 default_sa.go:55] duration metric: took 2.528645ms for default service account to be created ...
	I1210 06:10:44.077144  300941 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:10:44.079945  300941 system_pods.go:86] 7 kube-system pods found
	I1210 06:10:44.079972  300941 system_pods.go:89] "coredns-66bc5c9577-t6x5x" [b893f947-02d7-41b1-9886-9b0830ddf69c] Running
	I1210 06:10:44.079980  300941 system_pods.go:89] "etcd-pause-257171" [9e550ac3-4987-44a9-9425-c7758c2d698e] Running
	I1210 06:10:44.079986  300941 system_pods.go:89] "kindnet-8nqff" [afb8ed20-85e5-48ca-9b80-aba3e0f6e330] Running
	I1210 06:10:44.079991  300941 system_pods.go:89] "kube-apiserver-pause-257171" [9fade971-ee7e-4542-8911-93a6aa0fed0c] Running
	I1210 06:10:44.079998  300941 system_pods.go:89] "kube-controller-manager-pause-257171" [caac76e5-4ae6-4766-aa2c-31badc24a748] Running
	I1210 06:10:44.080004  300941 system_pods.go:89] "kube-proxy-hd5t7" [5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942] Running
	I1210 06:10:44.080014  300941 system_pods.go:89] "kube-scheduler-pause-257171" [ccca1914-c8e1-4da2-b9fc-60e3b1097de8] Running
	I1210 06:10:44.080023  300941 system_pods.go:126] duration metric: took 2.873065ms to wait for k8s-apps to be running ...
	I1210 06:10:44.080037  300941 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:10:44.080106  300941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:44.095165  300941 system_svc.go:56] duration metric: took 15.120844ms WaitForService to wait for kubelet
	I1210 06:10:44.095188  300941 kubeadm.go:587] duration metric: took 212.405865ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:10:44.095207  300941 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:10:44.098252  300941 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:10:44.098277  300941 node_conditions.go:123] node cpu capacity is 8
	I1210 06:10:44.098293  300941 node_conditions.go:105] duration metric: took 3.080616ms to run NodePressure ...
	I1210 06:10:44.098307  300941 start.go:242] waiting for startup goroutines ...
	I1210 06:10:44.098317  300941 start.go:247] waiting for cluster config update ...
	I1210 06:10:44.098328  300941 start.go:256] writing updated cluster config ...
	I1210 06:10:44.099999  300941 ssh_runner.go:195] Run: rm -f paused
	I1210 06:10:44.104185  300941 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:10:44.104786  300941 kapi.go:59] client config for pause-257171: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/profiles/pause-257171/client.key", CAFile:"/home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:10:44.107329  300941 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t6x5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.112237  300941 pod_ready.go:94] pod "coredns-66bc5c9577-t6x5x" is "Ready"
	I1210 06:10:44.112259  300941 pod_ready.go:86] duration metric: took 4.908574ms for pod "coredns-66bc5c9577-t6x5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.114493  300941 pod_ready.go:83] waiting for pod "etcd-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.119694  300941 pod_ready.go:94] pod "etcd-pause-257171" is "Ready"
	I1210 06:10:44.119716  300941 pod_ready.go:86] duration metric: took 5.203561ms for pod "etcd-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.121850  300941 pod_ready.go:83] waiting for pod "kube-apiserver-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.126981  300941 pod_ready.go:94] pod "kube-apiserver-pause-257171" is "Ready"
	I1210 06:10:44.127001  300941 pod_ready.go:86] duration metric: took 5.133144ms for pod "kube-apiserver-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.129462  300941 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.509792  300941 pod_ready.go:94] pod "kube-controller-manager-pause-257171" is "Ready"
	I1210 06:10:44.509824  300941 pod_ready.go:86] duration metric: took 380.342511ms for pod "kube-controller-manager-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:44.710389  300941 pod_ready.go:83] waiting for pod "kube-proxy-hd5t7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.108844  300941 pod_ready.go:94] pod "kube-proxy-hd5t7" is "Ready"
	I1210 06:10:45.108877  300941 pod_ready.go:86] duration metric: took 398.461834ms for pod "kube-proxy-hd5t7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.309300  300941 pod_ready.go:83] waiting for pod "kube-scheduler-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.708540  300941 pod_ready.go:94] pod "kube-scheduler-pause-257171" is "Ready"
	I1210 06:10:45.708574  300941 pod_ready.go:86] duration metric: took 399.245118ms for pod "kube-scheduler-pause-257171" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:10:45.708589  300941 pod_ready.go:40] duration metric: took 1.604335403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:10:45.755967  300941 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:10:45.757591  300941 out.go:179] * Done! kubectl is now configured to use "pause-257171" cluster and "default" namespace by default
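With the run above finished and kubectl pointed at the pause-257171 context, the readiness checks the test performed through client-go can be repeated by hand. A sketch, assuming kubectl is installed on the host and the context name matches the profile as stated in the final log line:

    kubectl --context pause-257171 get nodes
    kubectl --context pause-257171 -n kube-system get pods
    # the same healthz endpoint probed at 06:10:44.064 above
    curl -sk https://192.168.76.2:8443/healthz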
	I1210 06:10:45.512907  302200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:45.521342  302200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:10:45.528560  302200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:10:45.535580  302200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:45.619386  302200 ssh_runner.go:195] Run: sudo systemctl restart crio
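Taken together, the sed edits above pin the pause image, switch cri-o to the systemd cgroup manager, move conmon into the pod cgroup, and re-add the unprivileged-port sysctl before crio is restarted. The drop-in ends up roughly like the following; this is a sketch of the end state inferred from those edits, not a verbatim copy of the file:

    # /etc/crio/crio.conf.d/02-crio.conf (approximate result of the edits above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]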
	I1210 06:10:45.750523  302200 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:10:45.750593  302200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:10:45.755017  302200 start.go:564] Will wait 60s for crictl version
	I1210 06:10:45.755072  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:45.759162  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:10:45.786024  302200 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:10:45.786132  302200 ssh_runner.go:195] Run: crio --version
	I1210 06:10:45.819667  302200 ssh_runner.go:195] Run: crio --version
	I1210 06:10:45.852447  302200 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
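The version probe above talks to cri-o over the socket declared in /etc/crictl.yaml a moment earlier (unix:///var/run/crio/crio.sock). If this step ever hangs, the same check can be run by hand; a sketch, assuming the profile name from this log and minikube on PATH:

    minikube ssh -p kindnet-094798 -- sudo crictl version
    # broader runtime status, useful when the socket exists but the runtime is unhealthy
    minikube ssh -p kindnet-094798 -- sudo crictl info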
	I1210 06:10:41.529996  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:10:41.621360  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:10:41.639472  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1210 06:10:41.658069  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:10:41.675588  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:10:41.695516  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:10:41.719465  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:10:41.740054  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:10:41.757580  295895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:10:41.776676  295895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:10:41.790522  295895 ssh_runner.go:195] Run: openssl version
	I1210 06:10:41.796973  295895 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.804111  295895 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:10:41.811770  295895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.815900  295895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.815950  295895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:41.860019  295895 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:10:41.873722  295895 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:10:41.882903  295895 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.891738  295895 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:10:41.901891  295895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.906203  295895 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.906257  295895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:10:41.954459  295895 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:10:41.965493  295895 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9253.pem /etc/ssl/certs/51391683.0
	I1210 06:10:41.974456  295895 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:10:41.990887  295895 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:10:41.999525  295895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:10:42.003444  295895 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:10:42.003510  295895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:10:42.058065  295895 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:10:42.069005  295895 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92532.pem /etc/ssl/certs/3ec20f2e.0
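The openssl/ln pairs above build the standard OpenSSL hashed-directory layout: each CA in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The pattern for one certificate, using the minikubeCA file from this log:

    # compute the subject hash and create the hashed symlink, as the log does for b5213941.0
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"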
	I1210 06:10:42.078573  295895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:10:42.083210  295895 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:10:42.083270  295895 kubeadm.go:401] StartCluster: {Name:auto-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-094798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:42.083366  295895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:10:42.083412  295895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:10:42.117874  295895 cri.go:89] found id: ""
	I1210 06:10:42.117944  295895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:10:42.128028  295895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:10:42.136643  295895 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:10:42.136703  295895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:10:42.145951  295895 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:10:42.145971  295895 kubeadm.go:158] found existing configuration files:
	
	I1210 06:10:42.146016  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:10:42.155833  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:10:42.155897  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:10:42.165338  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:10:42.173982  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:10:42.174021  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:10:42.181998  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:10:42.190321  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:10:42.190376  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:10:42.198265  295895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:10:42.205910  295895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:10:42.205959  295895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
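For reference, the stale-config cleanup above repeats one grep-then-remove check per kubeadm config file; a minimal equivalent sketch (the loop form is illustrative, the commands and paths are those shown in the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Keep the file only if it already references the expected control-plane endpoint
        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done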
	I1210 06:10:42.214513  295895 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:10:42.285066  295895 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:10:42.351519  295895 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:10:43.421322  304042 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:10:43.421591  304042 start.go:159] libmachine.API.Create for "calico-094798" (driver="docker")
	I1210 06:10:43.421627  304042 client.go:173] LocalClient.Create starting
	I1210 06:10:43.421702  304042 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:10:43.421738  304042 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:43.421765  304042 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:43.421826  304042 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:10:43.421852  304042 main.go:143] libmachine: Decoding PEM data...
	I1210 06:10:43.421866  304042 main.go:143] libmachine: Parsing certificate...
	I1210 06:10:43.422314  304042 cli_runner.go:164] Run: docker network inspect calico-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:10:43.443474  304042 cli_runner.go:211] docker network inspect calico-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:10:43.443551  304042 network_create.go:284] running [docker network inspect calico-094798] to gather additional debugging logs...
	I1210 06:10:43.443574  304042 cli_runner.go:164] Run: docker network inspect calico-094798
	W1210 06:10:43.463840  304042 cli_runner.go:211] docker network inspect calico-094798 returned with exit code 1
	I1210 06:10:43.463872  304042 network_create.go:287] error running [docker network inspect calico-094798]: docker network inspect calico-094798: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-094798 not found
	I1210 06:10:43.463897  304042 network_create.go:289] output of [docker network inspect calico-094798]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-094798 not found
	
	** /stderr **
	I1210 06:10:43.464025  304042 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:43.486790  304042 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:10:43.487449  304042 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:10:43.488048  304042 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:10:43.488636  304042 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0397b6bc6aea IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:c5:49:61:c0:1c} reservation:<nil>}
	I1210 06:10:43.489444  304042 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-d6a8c526f793 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:22:92:d6:c3:5a:8b} reservation:<nil>}
	I1210 06:10:43.490247  304042 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0014a58b806a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:86:6f:7e:ad:4f:6b} reservation:<nil>}
	I1210 06:10:43.490983  304042 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f596d0}
	I1210 06:10:43.491010  304042 network_create.go:124] attempt to create docker network calico-094798 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 06:10:43.491070  304042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-094798 calico-094798
	I1210 06:10:43.544306  304042 network_create.go:108] docker network calico-094798 192.168.103.0/24 created
	I1210 06:10:43.544336  304042 kic.go:121] calculated static IP "192.168.103.2" for the "calico-094798" container
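For reference, the freshly created network can be verified with the same inspect format fields the log already uses; a minimal sketch (the verification command itself is illustrative, the values are those reported above):

    docker network inspect calico-094798 \
        --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected here: 192.168.103.0/24 192.168.103.1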
	I1210 06:10:43.544408  304042 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:10:43.563025  304042 cli_runner.go:164] Run: docker volume create calico-094798 --label name.minikube.sigs.k8s.io=calico-094798 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:10:43.568562  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:43.584437  304042 oci.go:103] Successfully created a docker volume calico-094798
	I1210 06:10:43.584514  304042 cli_runner.go:164] Run: docker run --rm --name calico-094798-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-094798 --entrypoint /usr/bin/test -v calico-094798:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:10:43.727989  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:43.883008  304042 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883044  304042 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883104  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:10:43.883117  304042 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 118.348µs
	I1210 06:10:43.883128  304042 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:10:43.883148  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:10:43.883146  304042 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883164  304042 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 122.723µs
	I1210 06:10:43.883174  304042 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:10:43.883189  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:10:43.883190  304042 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883203  304042 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 59.737µs
	I1210 06:10:43.883214  304042 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:10:43.883209  304042 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883231  304042 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883268  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:10:43.883251  304042 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883280  304042 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 52.159µs
	I1210 06:10:43.883298  304042 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:10:43.883236  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:10:43.883304  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:10:43.883312  304042 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 124.77µs
	I1210 06:10:43.883313  304042 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 64.318µs
	I1210 06:10:43.883320  304042 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:10:43.883322  304042 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:10:43.883284  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:10:43.883334  304042 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 146.66µs
	I1210 06:10:43.883344  304042 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:10:43.883000  304042 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:10:43.883377  304042 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:10:43.883399  304042 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 416.293µs
	I1210 06:10:43.883407  304042 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:10:43.883421  304042 cache.go:87] Successfully saved all images to host disk.
	I1210 06:10:44.004348  304042 oci.go:107] Successfully prepared a docker volume calico-094798
	I1210 06:10:44.004431  304042 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:10:44.004538  304042 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:10:44.004579  304042 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:10:44.004643  304042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:10:44.068299  304042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-094798 --name calico-094798 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-094798 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-094798 --network calico-094798 --ip 192.168.103.2 --volume calico-094798:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:10:44.378476  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Running}}
	I1210 06:10:44.398439  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Status}}
	I1210 06:10:44.416764  304042 cli_runner.go:164] Run: docker exec calico-094798 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:10:44.471779  304042 oci.go:144] the created container "calico-094798" has a running status.
	I1210 06:10:44.471807  304042 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa...
	I1210 06:10:44.620373  304042 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:10:44.645911  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Status}}
	I1210 06:10:44.676511  304042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:10:44.676536  304042 kic_runner.go:114] Args: [docker exec --privileged calico-094798 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:10:44.735673  304042 cli_runner.go:164] Run: docker container inspect calico-094798 --format={{.State.Status}}
	I1210 06:10:44.762065  304042 machine.go:94] provisionDockerMachine start ...
	I1210 06:10:44.762193  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:44.785616  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:44.785875  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:44.785899  304042 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:10:44.926970  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-094798
	
	I1210 06:10:44.927004  304042 ubuntu.go:182] provisioning hostname "calico-094798"
	I1210 06:10:44.927105  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:44.950331  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:44.950659  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:44.950676  304042 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-094798 && echo "calico-094798" | sudo tee /etc/hostname
	I1210 06:10:45.104421  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-094798
	
	I1210 06:10:45.104496  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.126913  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:45.127138  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:45.127156  304042 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-094798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-094798/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-094798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:10:45.274866  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:10:45.274892  304042 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:10:45.274925  304042 ubuntu.go:190] setting up certificates
	I1210 06:10:45.274937  304042 provision.go:84] configureAuth start
	I1210 06:10:45.275008  304042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-094798
	I1210 06:10:45.294720  304042 provision.go:143] copyHostCerts
	I1210 06:10:45.294791  304042 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:10:45.294804  304042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:10:45.294889  304042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:10:45.295006  304042 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:10:45.295019  304042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:10:45.295062  304042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:10:45.295170  304042 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:10:45.295181  304042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:10:45.295220  304042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:10:45.295306  304042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.calico-094798 san=[127.0.0.1 192.168.103.2 calico-094798 localhost minikube]
	I1210 06:10:45.311527  304042 provision.go:177] copyRemoteCerts
	I1210 06:10:45.311591  304042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:10:45.311640  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.331393  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:45.429975  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:10:45.451128  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 06:10:45.468449  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:10:45.484996  304042 provision.go:87] duration metric: took 210.038413ms to configureAuth
	I1210 06:10:45.485019  304042 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:10:45.485211  304042 config.go:182] Loaded profile config "calico-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:45.485317  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.504389  304042 main.go:143] libmachine: Using SSH client type: native
	I1210 06:10:45.504704  304042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:10:45.504727  304042 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:10:45.803213  304042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:10:45.803246  304042 machine.go:97] duration metric: took 1.041137813s to provisionDockerMachine
	I1210 06:10:45.803259  304042 client.go:176] duration metric: took 2.381622423s to LocalClient.Create
	I1210 06:10:45.803296  304042 start.go:167] duration metric: took 2.381697835s to libmachine.API.Create "calico-094798"
	I1210 06:10:45.803311  304042 start.go:293] postStartSetup for "calico-094798" (driver="docker")
	I1210 06:10:45.803329  304042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:10:45.803397  304042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:10:45.803449  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:45.826393  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:45.929332  304042 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:10:45.933747  304042 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:10:45.933787  304042 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:10:45.933799  304042 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:10:45.933854  304042 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:10:45.933944  304042 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:10:45.934059  304042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:10:45.942836  304042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:10:45.965040  304042 start.go:296] duration metric: took 161.709177ms for postStartSetup
	I1210 06:10:45.965489  304042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-094798
	I1210 06:10:45.984828  304042 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/calico-094798/config.json ...
	I1210 06:10:45.985137  304042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:10:45.985187  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:46.004324  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:46.097960  304042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:10:46.102658  304042 start.go:128] duration metric: took 2.683035841s to createHost
	I1210 06:10:46.102683  304042 start.go:83] releasing machines lock for "calico-094798", held for 2.683191529s
	I1210 06:10:46.102759  304042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-094798
	I1210 06:10:46.121715  304042 ssh_runner.go:195] Run: cat /version.json
	I1210 06:10:46.121782  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:46.121836  304042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:10:46.121912  304042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-094798
	I1210 06:10:46.147794  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:46.148720  304042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/calico-094798/id_rsa Username:docker}
	I1210 06:10:46.307934  304042 ssh_runner.go:195] Run: systemctl --version
	I1210 06:10:46.314861  304042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:10:46.346962  304042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:10:46.351671  304042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:10:46.351730  304042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:10:46.379313  304042 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:10:46.379333  304042 start.go:496] detecting cgroup driver to use...
	I1210 06:10:46.379361  304042 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:10:46.379404  304042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:10:46.400331  304042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:10:46.413428  304042 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:10:46.413478  304042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:10:46.430559  304042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:10:46.446845  304042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:10:46.537721  304042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:10:46.672740  304042 docker.go:234] disabling docker service ...
	I1210 06:10:46.672802  304042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:10:46.710487  304042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:10:46.737474  304042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:10:46.865650  304042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:10:46.984780  304042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:10:47.002560  304042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:10:47.022612  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:47.200663  304042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:10:47.200725  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.344695  304042 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:10:47.344767  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.356349  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.366907  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.377751  304042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:10:47.389305  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.401811  304042 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.423304  304042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:47.435210  304042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:10:47.445289  304042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:10:47.455556  304042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:47.563129  304042 ssh_runner.go:195] Run: sudo systemctl restart crio
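For reference, the CRI-O reconfiguration above boils down to editing the 02-crio.conf drop-in and restarting the service; a condensed sketch using the same sed expressions shown in the log (it omits the guards that first drop an existing conmon_cgroup line and ensure a default_sysctls section exists):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and switch CRI-O to the systemd cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Let pods bind privileged ports and enable IPv4 forwarding on the node
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio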
	I1210 06:10:47.730797  304042 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:10:47.730864  304042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:10:47.735209  304042 start.go:564] Will wait 60s for crictl version
	I1210 06:10:47.735257  304042 ssh_runner.go:195] Run: which crictl
	I1210 06:10:47.739329  304042 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:10:47.766455  304042 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:10:47.766538  304042 ssh_runner.go:195] Run: crio --version
	I1210 06:10:47.797025  304042 ssh_runner.go:195] Run: crio --version
	I1210 06:10:47.835590  304042 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:10:47.837187  304042 cli_runner.go:164] Run: docker network inspect calico-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:47.855950  304042 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:10:47.859950  304042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:10:47.873420  304042 kubeadm.go:884] updating cluster {Name:calico-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-094798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:10:47.873636  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:48.017869  304042 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:45.854239  302200 cli_runner.go:164] Run: docker network inspect kindnet-094798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:45.874255  302200 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:10:45.878513  302200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:10:45.889259  302200 kubeadm.go:884] updating cluster {Name:kindnet-094798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-094798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:10:45.889453  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:46.033592  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:46.195265  302200 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:10:46.333509  302200 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:10:46.333575  302200 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:10:46.358862  302200 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 06:10:46.358885  302200 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:10:46.359032  302200 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:46.359047  302200 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:46.359052  302200 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:46.359057  302200 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:10:46.359099  302200 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:46.359129  302200 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:46.359128  302200 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:46.359230  302200 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:46.360524  302200 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:46.360535  302200 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:46.360557  302200 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:46.360585  302200 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:46.360623  302200 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:46.360634  302200 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:46.360649  302200 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:46.360659  302200 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:10:46.509680  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:46.519917  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:46.521988  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:46.522982  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:46.529786  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:46.533897  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 06:10:46.534896  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:46.558104  302200 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 06:10:46.558148  302200 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:46.558194  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.573680  302200 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 06:10:46.573731  302200 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:46.573774  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.576938  302200 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 06:10:46.576984  302200 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:46.577029  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.584383  302200 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 06:10:46.584584  302200 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:46.584712  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.595715  302200 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 06:10:46.595765  302200 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:46.595808  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.601941  302200 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 06:10:46.601976  302200 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:10:46.601989  302200 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 06:10:46.602016  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.602028  302200 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:46.602069  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.602115  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:46.602138  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:46.602163  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:46.602175  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:46.608399  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:46.639964  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:46.640736  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:46.640736  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:46.640778  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:46.640805  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:46.640845  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:46.648780  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:46.691272  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:46.699936  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:46.699993  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:46.700141  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:46.700142  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:46.700205  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:46.700238  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:46.741513  302200 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:46.747431  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:46.762016  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:46.774382  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 06:10:46.774488  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:46.774669  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 06:10:46.774715  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 06:10:46.774746  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 06:10:46.774760  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:10:46.774805  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:46.774843  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:10:46.774856  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 06:10:46.775007  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:10:46.809447  302200 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 06:10:46.809489  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 06:10:46.809491  302200 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:46.809533  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 06:10:46.809553  302200 ssh_runner.go:195] Run: which crictl
	I1210 06:10:46.809559  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 06:10:46.809458  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 06:10:46.809579  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:10:46.809609  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 06:10:46.809622  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 06:10:46.809652  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:10:46.809672  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 06:10:46.809686  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 06:10:46.809737  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 06:10:46.809826  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 06:10:46.809648  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 06:10:46.809866  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 06:10:46.826412  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 06:10:46.826446  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:46.826451  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 06:10:46.826604  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:10:46.826630  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 06:10:46.970371  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:46.970746  302200 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:10:46.970808  302200 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 06:10:47.048243  302200 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:47.436620  302200 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:10:47.436659  302200 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:10:47.436699  302200 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:47.436738  302200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:10:47.436754  302200 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:48.723234  302200 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.286460531s)
	I1210 06:10:48.723263  302200 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 06:10:48.723284  302200 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:48.723326  302200 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:48.723421  302200 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.286671034s)
	I1210 06:10:48.723444  302200 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:10:48.723465  302200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	
	
	==> CRI-O <==
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.919902072Z" level=info msg="RDT not available in the host system"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.919917491Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.92079743Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.920819729Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.920836201Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.922364334Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.922756344Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.928276624Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.928296986Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.929036174Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.929544392Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 06:10:41 pause-257171 crio[3291]: time="2025-12-10T06:10:41.929602758Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.017412053Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-t6x5x Namespace:kube-system ID:4b569e2c432bd7fcdf352a86eea5724968b9eca534f130dd5a643dc5b6f23e37 UID:b893f947-02d7-41b1-9886-9b0830ddf69c NetNS:/var/run/netns/8c041cec-d674-4da3-8914-4b8df975afe8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005162f8}] Aliases:map[]}"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.017802837Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-t6x5x for CNI network kindnet (type=ptp)"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018417287Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018440189Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018499423Z" level=info msg="Create NRI interface"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018609414Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018622342Z" level=info msg="runtime interface created"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018636271Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018644485Z" level=info msg="runtime interface starting up..."
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018652339Z" level=info msg="starting plugins..."
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.018667611Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:10:42 pause-257171 crio[3291]: time="2025-12-10T06:10:42.019001577Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:10:42 pause-257171 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	66ade9741737d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     16 seconds ago      Running             coredns                   0                   4b569e2c432bd       coredns-66bc5c9577-t6x5x               kube-system
	f35e842647561       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11   27 seconds ago      Running             kindnet-cni               0                   694332fbe1d24       kindnet-8nqff                          kube-system
	a8135c67f5495       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     30 seconds ago      Running             kube-proxy                0                   3334f8496ef79       kube-proxy-hd5t7                       kube-system
	107fccc521147       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     40 seconds ago      Running             kube-controller-manager   0                   4998f7e82ae80       kube-controller-manager-pause-257171   kube-system
	0f33d39c61905       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     40 seconds ago      Running             kube-apiserver            0                   8dd1f2c35c493       kube-apiserver-pause-257171            kube-system
	deaa3b6fcb814       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     40 seconds ago      Running             etcd                      0                   e5a3641b6f408       etcd-pause-257171                      kube-system
	8dddd76371ac0       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     40 seconds ago      Running             kube-scheduler            0                   7e27883dc9703       kube-scheduler-pause-257171            kube-system
	
	
	==> coredns [66ade9741737d0618d31b9e331c6d038cdc1e2bb1fd529e9541980bf68e0abec] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33250 - 34534 "HINFO IN 6074135922786494092.6906691045981847379. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.088699836s
	
	
	==> describe nodes <==
	Name:               pause-257171
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-257171
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=pause-257171
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_10_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:10:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-257171
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:10:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:10:46 +0000   Wed, 10 Dec 2025 06:10:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-257171
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                ed004dca-6a51-4a18-8dcd-3ac4d151217f
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-t6x5x                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-pause-257171                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-8nqff                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-pause-257171             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-pause-257171    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-hd5t7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-pause-257171             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 36s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node pause-257171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node pause-257171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node pause-257171 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node pause-257171 event: Registered Node pause-257171 in Controller
	  Normal  NodeReady                17s   kubelet          Node pause-257171 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.085783] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023769] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.147072] kauditd_printk_skb: 47 callbacks suppressed
	[Dec10 05:30] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.051409] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[Dec10 05:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +2.047781] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +4.031549] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[  +8.447180] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[ +16.382295] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[Dec10 05:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	
	
	==> etcd [deaa3b6fcb814994f944b5b7e7ec3daa03eee3377299d9707ee55f5419d0fefe] <==
	{"level":"warn","ts":"2025-12-10T06:10:11.987628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:11.996345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.003548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.010045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.017382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.023654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.030812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.037174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.044979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.060200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.067025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.074056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.081500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.088774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.096346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.103382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.109854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.116397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.122874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.129938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.143293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.150914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48896","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:48896: read: connection reset by peer"}
	{"level":"warn","ts":"2025-12-10T06:10:12.159375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.167866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:12.208657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:10:51 up 53 min,  0 user,  load average: 5.41, 3.05, 1.99
	Linux pause-257171 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f35e84264756175001d4f7ecb61402e5f125b72604a043a8c128a061d528b9fd] <==
	I1210 06:10:23.593881       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:10:23.594234       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:10:23.594389       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:10:23.594404       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:10:23.594430       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:10:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:10:23.794303       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:10:23.794334       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:10:23.794349       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:10:23.794613       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:10:24.094841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:10:24.094873       1 metrics.go:72] Registering metrics
	I1210 06:10:24.094946       1 controller.go:711] "Syncing nftables rules"
	I1210 06:10:33.798498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:10:33.798560       1 main.go:301] handling current node
	I1210 06:10:43.802193       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:10:43.802229       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0f33d39c6190527f6d0e9ac7647ac9706f3280f63c52c18b858fc59065309e3e] <==
	I1210 06:10:12.715602       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:10:12.717134       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:10:12.717189       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:10:12.717210       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:10:12.717218       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:10:12.717225       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:10:12.741564       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:10:12.749466       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:10:13.608121       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:10:13.613888       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:10:13.613910       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:10:14.129947       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:10:14.177552       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:10:14.308430       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:10:14.313739       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 06:10:14.314625       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:10:14.318141       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:10:14.623652       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:10:15.373048       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:10:15.383236       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:10:15.390304       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:10:19.828969       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:10:19.833012       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:10:20.327629       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:10:20.725297       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [107fccc52114724b5c7829573ed47387d6cbba2579d253522d27aec12a9ce2af] <==
	I1210 06:10:19.624572       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:10:19.624590       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:10:19.624634       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:10:19.624660       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:10:19.624675       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:10:19.624691       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:10:19.624705       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:10:19.624861       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:10:19.626038       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:10:19.626059       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:10:19.626063       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:10:19.629519       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 06:10:19.629522       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:10:19.629541       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:10:19.629548       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:10:19.629585       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 06:10:19.629610       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 06:10:19.629617       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 06:10:19.629621       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 06:10:19.629735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:10:19.630984       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:10:19.631051       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:10:19.643805       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-257171" podCIDRs=["10.244.0.0/24"]
	I1210 06:10:19.648096       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:10:34.576386       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a8135c67f54958dd4233b61e0d015aa8778759365567de664bda3e2ba8db00ab] <==
	I1210 06:10:21.136612       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:10:21.234060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:10:21.334562       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:10:21.334598       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:10:21.334692       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:10:21.353307       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:10:21.353365       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:10:21.358546       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:10:21.358882       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:10:21.358907       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:10:21.360969       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:10:21.361001       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:10:21.361126       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:10:21.361149       1 config.go:200] "Starting service config controller"
	I1210 06:10:21.361163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:10:21.361170       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:10:21.361206       1 config.go:309] "Starting node config controller"
	I1210 06:10:21.361223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:10:21.361231       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:10:21.462235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:10:21.462266       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:10:21.462290       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8dddd76371ac09675b74acb4cc1233f21c564c3ec61620eb5510f2aa62d0fd76] <==
	E1210 06:10:12.676419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:12.676540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:10:12.676633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:12.676647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:10:12.676667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:10:12.676729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:10:12.676741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:10:12.676826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:10:12.676842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:10:12.676868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:10:12.677010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:10:12.677228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:10:13.549250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:10:13.571591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:10:13.612706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:10:13.617779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:10:13.658906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:13.679678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:10:13.714883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:10:13.737959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:10:13.747210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:13.816841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:10:13.874781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 06:10:13.907908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1210 06:10:16.971722       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.849977    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afb8ed20-85e5-48ca-9b80-aba3e0f6e330-lib-modules\") pod \"kindnet-8nqff\" (UID: \"afb8ed20-85e5-48ca-9b80-aba3e0f6e330\") " pod="kube-system/kindnet-8nqff"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850014    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6m27\" (UniqueName: \"kubernetes.io/projected/5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942-kube-api-access-w6m27\") pod \"kube-proxy-hd5t7\" (UID: \"5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942\") " pod="kube-system/kube-proxy-hd5t7"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850047    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8m6c\" (UniqueName: \"kubernetes.io/projected/afb8ed20-85e5-48ca-9b80-aba3e0f6e330-kube-api-access-s8m6c\") pod \"kindnet-8nqff\" (UID: \"afb8ed20-85e5-48ca-9b80-aba3e0f6e330\") " pod="kube-system/kindnet-8nqff"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850072    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942-kube-proxy\") pod \"kube-proxy-hd5t7\" (UID: \"5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942\") " pod="kube-system/kube-proxy-hd5t7"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850122    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942-lib-modules\") pod \"kube-proxy-hd5t7\" (UID: \"5c7c8775-6a41-44e3-b6b1-7a6a2b4c4942\") " pod="kube-system/kube-proxy-hd5t7"
	Dec 10 06:10:20 pause-257171 kubelet[2360]: I1210 06:10:20.850147    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/afb8ed20-85e5-48ca-9b80-aba3e0f6e330-cni-cfg\") pod \"kindnet-8nqff\" (UID: \"afb8ed20-85e5-48ca-9b80-aba3e0f6e330\") " pod="kube-system/kindnet-8nqff"
	Dec 10 06:10:22 pause-257171 kubelet[2360]: I1210 06:10:22.120397    2360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hd5t7" podStartSLOduration=2.1203528 podStartE2EDuration="2.1203528s" podCreationTimestamp="2025-12-10 06:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:10:21.272501169 +0000 UTC m=+6.151689146" watchObservedRunningTime="2025-12-10 06:10:22.1203528 +0000 UTC m=+6.999540777"
	Dec 10 06:10:24 pause-257171 kubelet[2360]: I1210 06:10:24.278435    2360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8nqff" podStartSLOduration=2.076586807 podStartE2EDuration="4.278418821s" podCreationTimestamp="2025-12-10 06:10:20 +0000 UTC" firstStartedPulling="2025-12-10 06:10:21.057017372 +0000 UTC m=+5.936205343" lastFinishedPulling="2025-12-10 06:10:23.258849401 +0000 UTC m=+8.138037357" observedRunningTime="2025-12-10 06:10:24.278216079 +0000 UTC m=+9.157404055" watchObservedRunningTime="2025-12-10 06:10:24.278418821 +0000 UTC m=+9.157606797"
	Dec 10 06:10:34 pause-257171 kubelet[2360]: I1210 06:10:34.365662    2360 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:10:34 pause-257171 kubelet[2360]: I1210 06:10:34.445848    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b893f947-02d7-41b1-9886-9b0830ddf69c-config-volume\") pod \"coredns-66bc5c9577-t6x5x\" (UID: \"b893f947-02d7-41b1-9886-9b0830ddf69c\") " pod="kube-system/coredns-66bc5c9577-t6x5x"
	Dec 10 06:10:34 pause-257171 kubelet[2360]: I1210 06:10:34.446046    2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt256\" (UniqueName: \"kubernetes.io/projected/b893f947-02d7-41b1-9886-9b0830ddf69c-kube-api-access-lt256\") pod \"coredns-66bc5c9577-t6x5x\" (UID: \"b893f947-02d7-41b1-9886-9b0830ddf69c\") " pod="kube-system/coredns-66bc5c9577-t6x5x"
	Dec 10 06:10:35 pause-257171 kubelet[2360]: I1210 06:10:35.317406    2360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t6x5x" podStartSLOduration=15.317387625 podStartE2EDuration="15.317387625s" podCreationTimestamp="2025-12-10 06:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:10:35.310261489 +0000 UTC m=+20.189449466" watchObservedRunningTime="2025-12-10 06:10:35.317387625 +0000 UTC m=+20.196575602"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: W1210 06:10:39.233314    2360 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233421    2360 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233522    2360 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233545    2360 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.233562    2360 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.301093    2360 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.301151    2360 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: E1210 06:10:39.301167    2360 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:10:39 pause-257171 kubelet[2360]: W1210 06:10:39.334369    2360 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:10:46 pause-257171 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:10:46 pause-257171 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:10:46 pause-257171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:46 pause-257171 systemd[1]: kubelet.service: Consumed 1.291s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257171 -n pause-257171
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257171 -n pause-257171: exit status 2 (405.940427ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-257171 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.41s)
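In the pause failure above, the kubelet loses its connection to /var/run/crio/crio.sock while `minikube status` still reports the API server as Running (exit status 2). A minimal sketch of re-checking the paused state by hand, assuming the pause-257171 profile is still running (illustrative commands, not captured test output):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257171                         # should report Paused once the control plane is frozen
	out/minikube-linux-amd64 ssh -p pause-257171 sudo systemctl status crio --all --full --no-pager  # crio is expected to be stopped while paused
	out/minikube-linux-amd64 ssh -p pause-257171 sudo runc list -f json                              # lists (paused) container state, if /run/runc exists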

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-725426 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-725426 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.190916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-725426 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
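MK_ADDON_ENABLE_PAUSED means minikube's pre-enable check, which runs `sudo runc list -f json` inside the node to decide whether the cluster is paused, failed; the stderr above shows it failed because /run/runc does not exist. A minimal sketch of re-running that check by hand, assuming the old-k8s-version-725426 profile is still up (illustrative commands, not captured test output):

	out/minikube-linux-amd64 ssh -p old-k8s-version-725426 sudo runc list -f json   # the same check minikube shells out to
	out/minikube-linux-amd64 ssh -p old-k8s-version-725426 ls -ld /run/runc         # confirms whether the runc state directory exists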
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-725426 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-725426 describe deploy/metrics-server -n kube-system: exit status 1 (62.881246ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-725426 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
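The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to reference the overridden image fake.domain/registry.k8s.io/echoserver:1.4; because the enable step above never completed, the deployment is absent and the deployment info is empty. A sketch of how that image override would normally be verified, assuming the addon had been enabled (illustrative, not captured test output):

	kubectl --context old-k8s-version-725426 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'   # should print the fake.domain-prefixed echoserver image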
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-725426
helpers_test.go:244: (dbg) docker inspect old-k8s-version-725426:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1",
	        "Created": "2025-12-10T06:12:38.650542481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:12:38.819214426Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/hostname",
	        "HostsPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/hosts",
	        "LogPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1-json.log",
	        "Name": "/old-k8s-version-725426",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-725426:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-725426",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1",
	                "LowerDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-725426",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-725426/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-725426",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-725426",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-725426",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "af75eb48b576ce76f8836e381df86672fc5a7ffefa8480015444a5af9be58eca",
	            "SandboxKey": "/var/run/docker/netns/af75eb48b576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-725426": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1ead66c643ddb232e3817c16b9e356f55b33d7d7d004331db07c60da2882eda",
	                    "EndpointID": "5953fd51044f25d016d1c9a9630eb468d223a01cc807c25bfd8404884cddbd0a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "e6:1a:47:80:ca:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-725426",
	                        "565a7417ad85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-725426 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-725426 logs -n 25: (1.283770301s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-094798 sudo cat /lib/systemd/system/containerd.service                                                                                 │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p enable-default-cni-094798 sudo cat /etc/containerd/config.toml                                                                                            │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p enable-default-cni-094798 sudo containerd config dump                                                                                                     │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p enable-default-cni-094798 sudo systemctl status crio --all --full --no-pager                                                                              │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p enable-default-cni-094798 sudo systemctl cat crio --no-pager                                                                                              │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p enable-default-cni-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                    │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p enable-default-cni-094798 sudo crio config                                                                                                                │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ delete  │ -p enable-default-cni-094798                                                                                                                                 │ enable-default-cni-094798 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo cat /etc/nsswitch.conf                                                                                                                │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo cat /etc/hosts                                                                                                                        │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ no-preload-468539         │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │                     │
	│ ssh     │ -p flannel-094798 sudo cat /etc/resolv.conf                                                                                                                  │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo crictl pods                                                                                                                           │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo crictl ps --all                                                                                                                       │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo ip a s                                                                                                                                │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo ip r s                                                                                                                                │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo iptables-save                                                                                                                         │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo iptables -t nat -L -n -v                                                                                                              │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo cat /run/flannel/subnet.env                                                                                                           │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo cat /etc/kube-flannel/cni-conf.json                                                                                                   │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │                     │
	│ ssh     │ -p flannel-094798 sudo systemctl status kubelet --all --full --no-pager                                                                                      │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-725426 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                 │ old-k8s-version-725426    │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │                     │
	│ ssh     │ -p flannel-094798 sudo systemctl cat kubelet --no-pager                                                                                                      │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ ssh     │ -p flannel-094798 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                       │ flannel-094798            │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:13:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:13:25.767132  358054 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:13:25.767255  358054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:25.767266  358054 out.go:374] Setting ErrFile to fd 2...
	I1210 06:13:25.767274  358054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:25.767499  358054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:13:25.767941  358054 out.go:368] Setting JSON to false
	I1210 06:13:25.769278  358054 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3350,"bootTime":1765343856,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:13:25.769327  358054 start.go:143] virtualization: kvm guest
	I1210 06:13:25.771092  358054 out.go:179] * [no-preload-468539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:13:25.772259  358054 notify.go:221] Checking for updates...
	I1210 06:13:25.772271  358054 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:13:25.773474  358054 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:13:25.774762  358054 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:13:25.776115  358054 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:13:25.777372  358054 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:13:25.778553  358054 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:13:25.780019  358054 config.go:182] Loaded profile config "bridge-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:25.780142  358054 config.go:182] Loaded profile config "flannel-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:25.780261  358054 config.go:182] Loaded profile config "old-k8s-version-725426": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:13:25.780394  358054 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:13:25.804505  358054 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:13:25.804650  358054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:13:25.864560  358054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:13:25.854643599 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:13:25.864659  358054 docker.go:319] overlay module found
	I1210 06:13:25.866108  358054 out.go:179] * Using the docker driver based on user configuration
	I1210 06:13:25.867729  358054 start.go:309] selected driver: docker
	I1210 06:13:25.867747  358054 start.go:927] validating driver "docker" against <nil>
	I1210 06:13:25.867761  358054 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:13:25.868344  358054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:13:25.931382  358054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:13:25.919770578 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:13:25.931704  358054 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:13:25.931960  358054 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:13:25.933593  358054 out.go:179] * Using Docker driver with root privileges
	I1210 06:13:25.934994  358054 cni.go:84] Creating CNI manager for ""
	I1210 06:13:25.935073  358054 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:13:25.935127  358054 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:13:25.935211  358054 start.go:353] cluster config:
	{Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:13:25.936617  358054 out.go:179] * Starting "no-preload-468539" primary control-plane node in "no-preload-468539" cluster
	I1210 06:13:25.937858  358054 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:13:25.938967  358054 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:13:25.939984  358054 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:13:25.940099  358054 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:13:25.940167  358054 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/config.json ...
	I1210 06:13:25.940204  358054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/config.json: {Name:mk8a71866f5e32a91de83255db5391a6a5ea56a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:13:25.940420  358054 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:13:25.964293  358054 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:13:25.964310  358054 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:13:25.964340  358054 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:13:25.964377  358054 start.go:360] acquireMachinesLock for no-preload-468539: {Name:mkf25110bcf822b894cb65642adeaf2352263d1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:25.964484  358054 start.go:364] duration metric: took 90.199µs to acquireMachinesLock for "no-preload-468539"
	I1210 06:13:25.964507  358054 start.go:93] Provisioning new machine with config: &{Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:13:25.964606  358054 start.go:125] createHost starting for "" (driver="docker")
	W1210 06:13:22.755230  340530 pod_ready.go:104] pod "coredns-66bc5c9577-q4lng" is not "Ready", error: <nil>
	W1210 06:13:25.255224  340530 pod_ready.go:104] pod "coredns-66bc5c9577-q4lng" is not "Ready", error: <nil>
	I1210 06:13:25.966714  358054 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:13:25.966952  358054 start.go:159] libmachine.API.Create for "no-preload-468539" (driver="docker")
	I1210 06:13:25.966993  358054 client.go:173] LocalClient.Create starting
	I1210 06:13:25.967074  358054 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:13:25.967131  358054 main.go:143] libmachine: Decoding PEM data...
	I1210 06:13:25.967150  358054 main.go:143] libmachine: Parsing certificate...
	I1210 06:13:25.967211  358054 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:13:25.967240  358054 main.go:143] libmachine: Decoding PEM data...
	I1210 06:13:25.967259  358054 main.go:143] libmachine: Parsing certificate...
	I1210 06:13:25.967632  358054 cli_runner.go:164] Run: docker network inspect no-preload-468539 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:13:25.986115  358054 cli_runner.go:211] docker network inspect no-preload-468539 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:13:25.986183  358054 network_create.go:284] running [docker network inspect no-preload-468539] to gather additional debugging logs...
	I1210 06:13:25.986200  358054 cli_runner.go:164] Run: docker network inspect no-preload-468539
	W1210 06:13:26.004034  358054 cli_runner.go:211] docker network inspect no-preload-468539 returned with exit code 1
	I1210 06:13:26.004087  358054 network_create.go:287] error running [docker network inspect no-preload-468539]: docker network inspect no-preload-468539: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-468539 not found
	I1210 06:13:26.004104  358054 network_create.go:289] output of [docker network inspect no-preload-468539]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-468539 not found
	
	** /stderr **
	I1210 06:13:26.004257  358054 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:13:26.023266  358054 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:13:26.023992  358054 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:13:26.024810  358054 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:13:26.025466  358054 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b1ead66c643d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:c5:90:28:3d:ff} reservation:<nil>}
	I1210 06:13:26.026012  358054 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f113aca6b913 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:4e:45:12:21:54} reservation:<nil>}
	I1210 06:13:26.026925  358054 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f44450}
	I1210 06:13:26.026949  358054 network_create.go:124] attempt to create docker network no-preload-468539 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 06:13:26.026998  358054 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-468539 no-preload-468539
	I1210 06:13:26.075576  358054 network_create.go:108] docker network no-preload-468539 192.168.94.0/24 created
	I1210 06:13:26.075602  358054 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-468539" container
	I1210 06:13:26.075677  358054 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:13:26.092997  358054 cli_runner.go:164] Run: docker volume create no-preload-468539 --label name.minikube.sigs.k8s.io=no-preload-468539 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:13:26.105369  358054 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:13:26.112472  358054 oci.go:103] Successfully created a docker volume no-preload-468539
	I1210 06:13:26.112546  358054 cli_runner.go:164] Run: docker run --rm --name no-preload-468539-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-468539 --entrypoint /usr/bin/test -v no-preload-468539:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:13:26.254030  358054 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:13:26.403727  358054 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.403753  358054 cache.go:107] acquiring lock: {Name:mk1e61937bbcbe456972ee92ce51441d0a310af5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.403786  358054 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.403811  358054 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:13:26.403819  358054 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.577µs
	I1210 06:13:26.403829  358054 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:13:26.403839  358054 cache.go:107] acquiring lock: {Name:mkfaee1dcd6a6f37ecb9d19fcd839a5a6d9b20e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.403868  358054 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:13:26.403874  358054 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 89.509µs
	I1210 06:13:26.403882  358054 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:13:26.403868  358054 cache.go:107] acquiring lock: {Name:mk76394a7d1abe4be60a9e73a4b33f52c38d5e6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.403893  358054 cache.go:107] acquiring lock: {Name:mk1df93d14c27f679df68c721474a110ecfc043b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.403943  358054 cache.go:107] acquiring lock: {Name:mke4d7efb2ee4879b97924080e0d429a33c1d765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.403964  358054 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:13:26.403973  358054 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:13:26.403971  358054 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:13:26.404029  358054 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:13:26.404125  358054 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:13:26.403760  358054 cache.go:107] acquiring lock: {Name:mk615200abc7eac862a5e41cd77ae4b62bf451cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:13:26.404488  358054 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:13:26.405263  358054 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:13:26.405465  358054 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:13:26.405481  358054 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:13:26.405491  358054 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:13:26.405476  358054 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:13:26.405558  358054 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:13:26.526630  358054 oci.go:107] Successfully prepared a docker volume no-preload-468539
	I1210 06:13:26.526672  358054 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1210 06:13:26.526748  358054 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:13:26.526788  358054 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:13:26.526829  358054 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:13:26.553721  358054 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:13:26.557288  358054 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:13:26.560819  358054 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:13:26.563995  358054 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:13:26.575757  358054 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:13:26.577576  358054 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:13:26.590172  358054 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-468539 --name no-preload-468539 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-468539 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-468539 --network no-preload-468539 --ip 192.168.94.2 --volume no-preload-468539:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:13:26.852032  358054 cache.go:157] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:13:26.852059  358054 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 448.319735ms
	I1210 06:13:26.852071  358054 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:13:26.911222  358054 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Running}}
	I1210 06:13:26.934157  358054 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:13:26.956227  358054 cli_runner.go:164] Run: docker exec no-preload-468539 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:13:27.007156  358054 oci.go:144] the created container "no-preload-468539" has a running status.
	I1210 06:13:27.007191  358054 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa...
	I1210 06:13:27.105433  358054 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:13:27.131954  358054 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:13:27.159896  358054 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:13:27.159919  358054 kic_runner.go:114] Args: [docker exec --privileged no-preload-468539 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:13:27.211957  358054 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:13:27.240447  358054 machine.go:94] provisionDockerMachine start ...
	I1210 06:13:27.240549  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:27.266187  358054 main.go:143] libmachine: Using SSH client type: native
	I1210 06:13:27.282946  358054 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 06:13:27.282970  358054 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:13:27.427194  358054 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-468539
	
	I1210 06:13:27.427224  358054 ubuntu.go:182] provisioning hostname "no-preload-468539"
	I1210 06:13:27.427291  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:27.449791  358054 main.go:143] libmachine: Using SSH client type: native
	I1210 06:13:27.450112  358054 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 06:13:27.450137  358054 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-468539 && echo "no-preload-468539" | sudo tee /etc/hostname
	I1210 06:13:27.608282  358054 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-468539
	
	I1210 06:13:27.608363  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:27.634089  358054 main.go:143] libmachine: Using SSH client type: native
	I1210 06:13:27.634405  358054 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 06:13:27.634432  358054 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-468539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-468539/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-468539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:13:27.776655  358054 cache.go:157] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:13:27.776684  358054 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.37282059s
	I1210 06:13:27.776699  358054 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:13:27.802760  358054 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:13:27.802795  358054 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:13:27.802843  358054 ubuntu.go:190] setting up certificates
	I1210 06:13:27.802861  358054 provision.go:84] configureAuth start
	I1210 06:13:27.802918  358054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:13:27.825034  358054 provision.go:143] copyHostCerts
	I1210 06:13:27.825105  358054 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:13:27.825124  358054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:13:27.825202  358054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:13:27.825333  358054 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:13:27.825341  358054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:13:27.825397  358054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:13:27.825482  358054 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:13:27.825488  358054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:13:27.825523  358054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:13:27.825585  358054 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.no-preload-468539 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-468539]
	I1210 06:13:27.857433  358054 cache.go:157] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:13:27.857463  358054 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.453623358s
	I1210 06:13:27.857478  358054 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:13:27.858368  358054 cache.go:157] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:13:27.858390  358054 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.454453157s
	I1210 06:13:27.858402  358054 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:13:27.906493  358054 cache.go:157] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:13:27.906515  358054 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.502651817s
	I1210 06:13:27.906525  358054 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:13:27.908182  358054 cache.go:157] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:13:27.908204  358054 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.504452852s
	I1210 06:13:27.908213  358054 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:13:27.908226  358054 cache.go:87] Successfully saved all images to host disk.
	I1210 06:13:27.937107  358054 provision.go:177] copyRemoteCerts
	I1210 06:13:27.937156  358054 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:13:27.937194  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:27.956605  358054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:13:28.054897  358054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:13:28.078015  358054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:13:28.095710  358054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:13:28.113639  358054 provision.go:87] duration metric: took 310.756348ms to configureAuth
	I1210 06:13:28.113662  358054 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:13:28.113890  358054 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:13:28.114016  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:28.131669  358054 main.go:143] libmachine: Using SSH client type: native
	I1210 06:13:28.131982  358054 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 06:13:28.132006  358054 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:13:28.405816  358054 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:13:28.405846  358054 machine.go:97] duration metric: took 1.165380108s to provisionDockerMachine
	I1210 06:13:28.405856  358054 client.go:176] duration metric: took 2.438853413s to LocalClient.Create
	I1210 06:13:28.405874  358054 start.go:167] duration metric: took 2.438924104s to libmachine.API.Create "no-preload-468539"
	I1210 06:13:28.405885  358054 start.go:293] postStartSetup for "no-preload-468539" (driver="docker")
	I1210 06:13:28.405898  358054 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:13:28.405954  358054 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:13:28.405996  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:28.424061  358054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:13:28.520235  358054 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:13:28.523753  358054 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:13:28.523790  358054 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:13:28.523803  358054 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:13:28.523857  358054 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:13:28.523954  358054 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:13:28.524072  358054 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:13:28.532257  358054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:13:28.552521  358054 start.go:296] duration metric: took 146.619459ms for postStartSetup
	I1210 06:13:28.552900  358054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:13:28.571372  358054 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/config.json ...
	I1210 06:13:28.571629  358054 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:13:28.571679  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:28.592173  358054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:13:28.688197  358054 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:13:28.692755  358054 start.go:128] duration metric: took 2.728131823s to createHost
	I1210 06:13:28.692780  358054 start.go:83] releasing machines lock for "no-preload-468539", held for 2.72828336s
	I1210 06:13:28.692859  358054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:13:28.710753  358054 ssh_runner.go:195] Run: cat /version.json
	I1210 06:13:28.710806  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:28.710871  358054 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:13:28.710940  358054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:13:28.729855  358054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:13:28.730189  358054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:13:28.888813  358054 ssh_runner.go:195] Run: systemctl --version
	I1210 06:13:28.896309  358054 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:13:28.935183  358054 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:13:28.939942  358054 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:13:28.940001  358054 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:13:28.969439  358054 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:13:28.969461  358054 start.go:496] detecting cgroup driver to use...
	I1210 06:13:28.969492  358054 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:13:28.969545  358054 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:13:28.985618  358054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:13:28.997057  358054 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:13:28.997131  358054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:13:29.013832  358054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:13:29.031827  358054 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:13:29.115414  358054 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:13:29.210295  358054 docker.go:234] disabling docker service ...
	I1210 06:13:29.210366  358054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:13:29.231508  358054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:13:29.244201  358054 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:13:29.341123  358054 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:13:29.432807  358054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:13:29.447240  358054 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:13:29.463585  358054 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:13:29.617403  358054 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:13:29.617462  358054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:13:29.628098  358054 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:13:29.628159  358054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:13:29.637294  358054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:13:29.647107  358054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:13:29.656500  358054 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:13:29.665810  358054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:13:29.674887  358054 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:13:29.690090  358054 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:13:29.699116  358054 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:13:29.707176  358054 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:13:29.714835  358054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:13:29.818719  358054 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:13:30.200997  358054 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:13:30.201063  358054 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:13:30.206148  358054 start.go:564] Will wait 60s for crictl version
	I1210 06:13:30.206208  358054 ssh_runner.go:195] Run: which crictl
	I1210 06:13:30.210281  358054 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:13:30.237438  358054 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:13:30.237530  358054 ssh_runner.go:195] Run: crio --version
	I1210 06:13:30.277192  358054 ssh_runner.go:195] Run: crio --version
	I1210 06:13:30.311620  358054 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:13:30.312722  358054 cli_runner.go:164] Run: docker network inspect no-preload-468539 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:13:30.331826  358054 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 06:13:30.336157  358054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:13:30.346363  358054 kubeadm.go:884] updating cluster {Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:13:30.346525  358054 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:13:30.486157  358054 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:13:30.637804  358054 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	
	
	==> CRI-O <==
	Dec 10 06:13:19 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:19.195517266Z" level=info msg="Starting container: abc44b341fbeadc881bbe8cf9dbed687c655d03ca34b814b699024e5d1996061" id=8fcbeda9-65bd-43e0-a0d2-6413eb49a392 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:13:19 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:19.197790799Z" level=info msg="Started container" PID=2145 containerID=abc44b341fbeadc881bbe8cf9dbed687c655d03ca34b814b699024e5d1996061 description=kube-system/coredns-5dd5756b68-vxb6d/coredns id=8fcbeda9-65bd-43e0-a0d2-6413eb49a392 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8be3d37961f8d5ddf5a65862dfd5d2474e2561a9a380782dae07c0ac2e06643
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.33691777Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2dc72794-6d13-4b00-b676-742f2994c5b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.336997301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.342734011Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ad756c7d49a03df5993934a59d56b55f606f2eb25318939ba3d3146b9ae9c410 UID:afc89bbc-2505-4919-a0eb-647322d563cc NetNS:/var/run/netns/789153e7-d003-4b8a-9936-4ab2bd25e215 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00079a9d8}] Aliases:map[]}"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.342777092Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.352797782Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ad756c7d49a03df5993934a59d56b55f606f2eb25318939ba3d3146b9ae9c410 UID:afc89bbc-2505-4919-a0eb-647322d563cc NetNS:/var/run/netns/789153e7-d003-4b8a-9936-4ab2bd25e215 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00079a9d8}] Aliases:map[]}"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.352971363Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.35390823Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.354952623Z" level=info msg="Ran pod sandbox ad756c7d49a03df5993934a59d56b55f606f2eb25318939ba3d3146b9ae9c410 with infra container: default/busybox/POD" id=2dc72794-6d13-4b00-b676-742f2994c5b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.356032916Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=77f68c56-d471-4d79-bfc0-5fd0568819ca name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.356171267Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=77f68c56-d471-4d79-bfc0-5fd0568819ca name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.356206463Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=77f68c56-d471-4d79-bfc0-5fd0568819ca name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.356690987Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=95820529-8e04-43ed-8e70-89ccae112c2e name=/runtime.v1.ImageService/PullImage
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.358303254Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.959670476Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=95820529-8e04-43ed-8e70-89ccae112c2e name=/runtime.v1.ImageService/PullImage
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.960945778Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3850715-ba26-40b3-8049-2db86811800e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.963040162Z" level=info msg="Creating container: default/busybox/busybox" id=0e43774e-333b-4b99-8c0f-47e21ea3b1cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.963235035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.968096982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:13:22 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:22.968510428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:13:23 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:23.001059058Z" level=info msg="Created container ac204ef9b2c1e066a6144832d60590843b6887b0fba55fbf6495c0906c016104: default/busybox/busybox" id=0e43774e-333b-4b99-8c0f-47e21ea3b1cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:13:23 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:23.001703559Z" level=info msg="Starting container: ac204ef9b2c1e066a6144832d60590843b6887b0fba55fbf6495c0906c016104" id=5c735a10-3b40-4c17-99c6-05dc31047119 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:13:23 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:23.003564785Z" level=info msg="Started container" PID=2224 containerID=ac204ef9b2c1e066a6144832d60590843b6887b0fba55fbf6495c0906c016104 description=default/busybox/busybox id=5c735a10-3b40-4c17-99c6-05dc31047119 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad756c7d49a03df5993934a59d56b55f606f2eb25318939ba3d3146b9ae9c410
	Dec 10 06:13:30 old-k8s-version-725426 crio[772]: time="2025-12-10T06:13:30.127418836Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ac204ef9b2c1e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   ad756c7d49a03       busybox                                          default
	abc44b341fbea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   c8be3d37961f8       coredns-5dd5756b68-vxb6d                         kube-system
	0e444bed40a4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   7993f1ed73379       storage-provisioner                              kube-system
	d434f05b928b5       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   c09f30d38a32a       kindnet-5zsjn                                    kube-system
	cdd161475197d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   46e332d8df945       kube-proxy-m59j8                                 kube-system
	6f46f746227e8       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   fdbb8cfea12f5       kube-scheduler-old-k8s-version-725426            kube-system
	dc3fc552197d8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   e687f9f007994       kube-controller-manager-old-k8s-version-725426   kube-system
	236ba40d0f857       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   e3d5aa9449982       kube-apiserver-old-k8s-version-725426            kube-system
	2ca095f2204e0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   bc43aa95fb7dd       etcd-old-k8s-version-725426                      kube-system
	
	
	==> coredns [abc44b341fbeadc881bbe8cf9dbed687c655d03ca34b814b699024e5d1996061] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48352 - 57778 "HINFO IN 1550128415481554949.3395314337334442544. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.879892749s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-725426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-725426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=old-k8s-version-725426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_12_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-725426
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:13:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:13:24 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:13:24 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:13:24 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:13:24 +0000   Wed, 10 Dec 2025 06:13:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-725426
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                b7d5f572-8473-408c-855f-67c8fb07b4fa
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-vxb6d                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-725426                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-5zsjn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-725426             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-725426    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-m59j8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-725426             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node old-k8s-version-725426 event: Registered Node old-k8s-version-725426 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-725426 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000018] ll header: 00000000: 42 ae 2b 34 45 8c c6 70 9d 75 0f 8b 08 00
	[Dec10 06:11] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[Dec10 06:12] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	
	
	==> etcd [2ca095f2204e0dae3c265418656e213c1afbd935bfdd70ea7991cc8f1eb8ffb6] <==
	{"level":"info","ts":"2025-12-10T06:12:48.482717Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-10T06:12:48.482785Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-10T06:12:48.484793Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-10T06:12:48.484852Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:12:48.485031Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:12:48.485303Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:12:48.485358Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:12:48.565206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-10T06:12:48.565256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-10T06:12:48.565274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-10T06:12:48.565288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:12:48.565297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:12:48.565318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-10T06:12:48.565329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:12:48.566767Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:12:48.567719Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-725426 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:12:48.568288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:12:48.568825Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:12:48.568896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:12:48.568336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:12:48.56847Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:12:48.56936Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:12:48.569995Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:12:48.571125Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:12:48.580985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 06:13:31 up 55 min,  0 user,  load average: 3.85, 3.83, 2.49
	Linux old-k8s-version-725426 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d434f05b928b514738d277765792399404f1905b4926e065cd25a3cd1976ae0f] <==
	I1210 06:13:08.303050       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:13:08.303341       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:13:08.303481       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:13:08.303500       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:13:08.303535       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:13:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:13:08.506208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:13:08.506250       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:13:08.506262       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:13:08.506391       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:13:08.900456       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:13:08.900487       1 metrics.go:72] Registering metrics
	I1210 06:13:08.900557       1 controller.go:711] "Syncing nftables rules"
	I1210 06:13:18.512173       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:13:18.512212       1 main.go:301] handling current node
	I1210 06:13:28.509146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:13:28.509177       1 main.go:301] handling current node
	
	
	==> kube-apiserver [236ba40d0f857861152d75a56c6d688d49d976abfe8e8163185e4e2e9c21898e] <==
	I1210 06:12:50.396832       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 06:12:50.396817       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1210 06:12:50.396849       1 aggregator.go:166] initial CRD sync complete...
	I1210 06:12:50.396855       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 06:12:50.396859       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:12:50.396864       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:12:50.397170       1 shared_informer.go:318] Caches are synced for configmaps
	I1210 06:12:50.398433       1 controller.go:624] quota admission added evaluator for: namespaces
	E1210 06:12:50.403199       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1210 06:12:50.608177       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:12:51.302599       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:12:51.306928       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:12:51.306946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:12:51.777828       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:12:51.815629       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:12:51.910853       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:12:51.916288       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 06:12:51.917388       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 06:12:51.924336       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:12:52.355137       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 06:12:53.198226       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 06:12:53.216557       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:12:53.225177       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1210 06:13:05.957411       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1210 06:13:06.054960       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [dc3fc552197d8d868e1e161ffe1c4bd5672d21a160d765c065c2f362370abc6a] <==
	I1210 06:13:05.321010       1 shared_informer.go:318] Caches are synced for attach detach
	I1210 06:13:05.349790       1 shared_informer.go:318] Caches are synced for endpoint
	I1210 06:13:05.708818       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:13:05.708849       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 06:13:05.725126       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:13:05.964849       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m59j8"
	I1210 06:13:05.966320       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5zsjn"
	I1210 06:13:06.057712       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1210 06:13:06.208060       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vxb6d"
	I1210 06:13:06.212526       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-kds28"
	I1210 06:13:06.218453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="160.715982ms"
	I1210 06:13:06.223218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.713273ms"
	I1210 06:13:06.223327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.776µs"
	I1210 06:13:06.224859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.509µs"
	I1210 06:13:06.688650       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1210 06:13:06.696563       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-kds28"
	I1210 06:13:06.703427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.974257ms"
	I1210 06:13:06.715503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.019967ms"
	I1210 06:13:06.715604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.243µs"
	I1210 06:13:18.834071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.739µs"
	I1210 06:13:18.849846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.564µs"
	I1210 06:13:19.353888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.836µs"
	I1210 06:13:20.303545       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1210 06:13:20.372912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.363075ms"
	I1210 06:13:20.373263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.156µs"
	
	
	==> kube-proxy [cdd161475197d06c5bda3d1eaffc88751157de31b05fc205444aafeeaf5947e3] <==
	I1210 06:13:06.403633       1 server_others.go:69] "Using iptables proxy"
	I1210 06:13:06.414020       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1210 06:13:06.438405       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:13:06.441597       1 server_others.go:152] "Using iptables Proxier"
	I1210 06:13:06.441638       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 06:13:06.441649       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 06:13:06.441706       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 06:13:06.442017       1 server.go:846] "Version info" version="v1.28.0"
	I1210 06:13:06.442064       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:13:06.442951       1 config.go:315] "Starting node config controller"
	I1210 06:13:06.442971       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 06:13:06.443319       1 config.go:188] "Starting service config controller"
	I1210 06:13:06.443338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 06:13:06.443363       1 config.go:97] "Starting endpoint slice config controller"
	I1210 06:13:06.443368       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 06:13:06.543144       1 shared_informer.go:318] Caches are synced for node config
	I1210 06:13:06.543698       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 06:13:06.543714       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6f46f746227e802183040ecbd7e35dfe5c78cb059c34e0b1ed47ffb7deab3f3b] <==
	W1210 06:12:50.366597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 06:12:50.366620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1210 06:12:50.366633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 06:12:50.366657       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1210 06:12:50.367467       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 06:12:50.367549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1210 06:12:50.367474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 06:12:50.367577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1210 06:12:51.191915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 06:12:51.191950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1210 06:12:51.258763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 06:12:51.258792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1210 06:12:51.363650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 06:12:51.363792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1210 06:12:51.453170       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 06:12:51.453209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1210 06:12:51.496200       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 06:12:51.496239       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:12:51.543188       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 06:12:51.543292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1210 06:12:51.631113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 06:12:51.631159       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1210 06:12:51.631357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 06:12:51.631401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1210 06:12:54.760623       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 06:13:05 old-k8s-version-725426 kubelet[1380]: I1210 06:13:05.221689    1380 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:13:05 old-k8s-version-725426 kubelet[1380]: I1210 06:13:05.970200    1380 topology_manager.go:215] "Topology Admit Handler" podUID="56434dce-a88f-481b-826f-9bdec83160cc" podNamespace="kube-system" podName="kube-proxy-m59j8"
	Dec 10 06:13:05 old-k8s-version-725426 kubelet[1380]: I1210 06:13:05.974348    1380 topology_manager.go:215] "Topology Admit Handler" podUID="78a3ed50-3d4f-4b58-955f-1cf071602aae" podNamespace="kube-system" podName="kindnet-5zsjn"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.040906    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lxqz\" (UniqueName: \"kubernetes.io/projected/78a3ed50-3d4f-4b58-955f-1cf071602aae-kube-api-access-4lxqz\") pod \"kindnet-5zsjn\" (UID: \"78a3ed50-3d4f-4b58-955f-1cf071602aae\") " pod="kube-system/kindnet-5zsjn"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.040947    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56434dce-a88f-481b-826f-9bdec83160cc-kube-proxy\") pod \"kube-proxy-m59j8\" (UID: \"56434dce-a88f-481b-826f-9bdec83160cc\") " pod="kube-system/kube-proxy-m59j8"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.040967    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5942h\" (UniqueName: \"kubernetes.io/projected/56434dce-a88f-481b-826f-9bdec83160cc-kube-api-access-5942h\") pod \"kube-proxy-m59j8\" (UID: \"56434dce-a88f-481b-826f-9bdec83160cc\") " pod="kube-system/kube-proxy-m59j8"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.041002    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/78a3ed50-3d4f-4b58-955f-1cf071602aae-cni-cfg\") pod \"kindnet-5zsjn\" (UID: \"78a3ed50-3d4f-4b58-955f-1cf071602aae\") " pod="kube-system/kindnet-5zsjn"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.041032    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56434dce-a88f-481b-826f-9bdec83160cc-xtables-lock\") pod \"kube-proxy-m59j8\" (UID: \"56434dce-a88f-481b-826f-9bdec83160cc\") " pod="kube-system/kube-proxy-m59j8"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.041062    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56434dce-a88f-481b-826f-9bdec83160cc-lib-modules\") pod \"kube-proxy-m59j8\" (UID: \"56434dce-a88f-481b-826f-9bdec83160cc\") " pod="kube-system/kube-proxy-m59j8"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.041117    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78a3ed50-3d4f-4b58-955f-1cf071602aae-xtables-lock\") pod \"kindnet-5zsjn\" (UID: \"78a3ed50-3d4f-4b58-955f-1cf071602aae\") " pod="kube-system/kindnet-5zsjn"
	Dec 10 06:13:06 old-k8s-version-725426 kubelet[1380]: I1210 06:13:06.041180    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78a3ed50-3d4f-4b58-955f-1cf071602aae-lib-modules\") pod \"kindnet-5zsjn\" (UID: \"78a3ed50-3d4f-4b58-955f-1cf071602aae\") " pod="kube-system/kindnet-5zsjn"
	Dec 10 06:13:07 old-k8s-version-725426 kubelet[1380]: I1210 06:13:07.322624    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m59j8" podStartSLOduration=2.322564538 podCreationTimestamp="2025-12-10 06:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:13:07.321986705 +0000 UTC m=+14.157809648" watchObservedRunningTime="2025-12-10 06:13:07.322564538 +0000 UTC m=+14.158387468"
	Dec 10 06:13:08 old-k8s-version-725426 kubelet[1380]: I1210 06:13:08.320022    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5zsjn" podStartSLOduration=1.498614321 podCreationTimestamp="2025-12-10 06:13:05 +0000 UTC" firstStartedPulling="2025-12-10 06:13:06.28643509 +0000 UTC m=+13.122258012" lastFinishedPulling="2025-12-10 06:13:08.107789512 +0000 UTC m=+14.943612427" observedRunningTime="2025-12-10 06:13:08.319964479 +0000 UTC m=+15.155787409" watchObservedRunningTime="2025-12-10 06:13:08.319968736 +0000 UTC m=+15.155791671"
	Dec 10 06:13:18 old-k8s-version-725426 kubelet[1380]: I1210 06:13:18.812109    1380 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 10 06:13:18 old-k8s-version-725426 kubelet[1380]: I1210 06:13:18.832621    1380 topology_manager.go:215] "Topology Admit Handler" podUID="5236226c-f0ed-4cdb-a1b9-7b8be18cbbca" podNamespace="kube-system" podName="storage-provisioner"
	Dec 10 06:13:18 old-k8s-version-725426 kubelet[1380]: I1210 06:13:18.833769    1380 topology_manager.go:215] "Topology Admit Handler" podUID="35114c39-50c7-4638-8d8f-54fac4c36ff2" podNamespace="kube-system" podName="coredns-5dd5756b68-vxb6d"
	Dec 10 06:13:18 old-k8s-version-725426 kubelet[1380]: I1210 06:13:18.937215    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxjb\" (UniqueName: \"kubernetes.io/projected/5236226c-f0ed-4cdb-a1b9-7b8be18cbbca-kube-api-access-ckxjb\") pod \"storage-provisioner\" (UID: \"5236226c-f0ed-4cdb-a1b9-7b8be18cbbca\") " pod="kube-system/storage-provisioner"
	Dec 10 06:13:18 old-k8s-version-725426 kubelet[1380]: I1210 06:13:18.937278    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5236226c-f0ed-4cdb-a1b9-7b8be18cbbca-tmp\") pod \"storage-provisioner\" (UID: \"5236226c-f0ed-4cdb-a1b9-7b8be18cbbca\") " pod="kube-system/storage-provisioner"
	Dec 10 06:13:18 old-k8s-version-725426 kubelet[1380]: I1210 06:13:18.937316    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35114c39-50c7-4638-8d8f-54fac4c36ff2-config-volume\") pod \"coredns-5dd5756b68-vxb6d\" (UID: \"35114c39-50c7-4638-8d8f-54fac4c36ff2\") " pod="kube-system/coredns-5dd5756b68-vxb6d"
	Dec 10 06:13:18 old-k8s-version-725426 kubelet[1380]: I1210 06:13:18.937353    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvmll\" (UniqueName: \"kubernetes.io/projected/35114c39-50c7-4638-8d8f-54fac4c36ff2-kube-api-access-mvmll\") pod \"coredns-5dd5756b68-vxb6d\" (UID: \"35114c39-50c7-4638-8d8f-54fac4c36ff2\") " pod="kube-system/coredns-5dd5756b68-vxb6d"
	Dec 10 06:13:19 old-k8s-version-725426 kubelet[1380]: I1210 06:13:19.344471    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.344427684 podCreationTimestamp="2025-12-10 06:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:13:19.344374872 +0000 UTC m=+26.180197804" watchObservedRunningTime="2025-12-10 06:13:19.344427684 +0000 UTC m=+26.180250614"
	Dec 10 06:13:20 old-k8s-version-725426 kubelet[1380]: I1210 06:13:20.353182    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vxb6d" podStartSLOduration=14.353132566 podCreationTimestamp="2025-12-10 06:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:13:19.353816745 +0000 UTC m=+26.189639676" watchObservedRunningTime="2025-12-10 06:13:20.353132566 +0000 UTC m=+27.188955495"
	Dec 10 06:13:22 old-k8s-version-725426 kubelet[1380]: I1210 06:13:22.034788    1380 topology_manager.go:215] "Topology Admit Handler" podUID="afc89bbc-2505-4919-a0eb-647322d563cc" podNamespace="default" podName="busybox"
	Dec 10 06:13:22 old-k8s-version-725426 kubelet[1380]: I1210 06:13:22.056994    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28j5c\" (UniqueName: \"kubernetes.io/projected/afc89bbc-2505-4919-a0eb-647322d563cc-kube-api-access-28j5c\") pod \"busybox\" (UID: \"afc89bbc-2505-4919-a0eb-647322d563cc\") " pod="default/busybox"
	Dec 10 06:13:23 old-k8s-version-725426 kubelet[1380]: I1210 06:13:23.356338    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.752683077 podCreationTimestamp="2025-12-10 06:13:22 +0000 UTC" firstStartedPulling="2025-12-10 06:13:22.356377421 +0000 UTC m=+29.192200333" lastFinishedPulling="2025-12-10 06:13:22.959988401 +0000 UTC m=+29.795811316" observedRunningTime="2025-12-10 06:13:23.356156307 +0000 UTC m=+30.191979248" watchObservedRunningTime="2025-12-10 06:13:23.35629406 +0000 UTC m=+30.192116990"
	
	
	==> storage-provisioner [0e444bed40a4b610ed805d68ed78fa2087c7444c25e4abfe23e0c804049499d9] <==
	I1210 06:13:19.192238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:13:19.203844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:13:19.203894       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 06:13:19.213456       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:13:19.213585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2524e9c-7625-4e55-9d2f-d2c7b14c23d5", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-725426_2bc57812-b4f5-4679-bba0-975fe08f3377 became leader
	I1210 06:13:19.213714       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-725426_2bc57812-b4f5-4679-bba0-975fe08f3377!
	I1210 06:13:19.314169       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-725426_2bc57812-b4f5-4679-bba0-975fe08f3377!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725426 -n old-k8s-version-725426
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-725426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.447537ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
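The MK_ADDON_ENABLE_PAUSED failure above is minikube's paused-state check tripping before the addon is applied: per the stderr, it shells out to "sudo runc list -f json" on the node and treats the non-zero exit as fatal, and the command fails here because the /run/runc state directory it reads does not exist on the crio-based node. A rough way to reproduce the failing probe by hand is sketched below; the docker exec form and the container name no-preload-468539 are taken from the docker inspect output further down, and running the same command via "minikube ssh -p no-preload-468539" would be equivalent.

	# Run the same probe the addon command ran, directly against the node container:
	docker exec no-preload-468539 sudo runc list -f json
	# In this run it exits 1 with "open /run/runc: no such file or directory",
	# i.e. the runc state directory it tries to read was never created on the node.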
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-468539 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-468539 describe deploy/metrics-server -n kube-system: exit status 1 (81.471046ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-468539 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
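For context, the string the test looks for, fake.domain/registry.k8s.io/echoserver:1.4, is the --registries=MetricsServer=fake.domain value prefixed onto the --images=MetricsServer=registry.k8s.io/echoserver:1.4 value from the failing command above, so the assertion can only pass once the addon has actually created the metrics-server deployment with the rewritten image. Had the deployment existed, a check along the following lines (same kubectl context as above; the jsonpath is illustrative) would surface the image in use:

	kubectl --context no-preload-468539 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# Expected to print fake.domain/registry.k8s.io/echoserver:1.4; in this run the
	# deployment was never created because the addon enable call itself failed.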
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-468539
helpers_test.go:244: (dbg) docker inspect no-preload-468539:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f",
	        "Created": "2025-12-10T06:13:26.609062695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 358838,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:13:26.649070143Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/hosts",
	        "LogPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f-json.log",
	        "Name": "/no-preload-468539",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-468539:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-468539",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f",
	                "LowerDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-468539",
	                "Source": "/var/lib/docker/volumes/no-preload-468539/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-468539",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-468539",
	                "name.minikube.sigs.k8s.io": "no-preload-468539",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f0d3f0a525a4e2486ab427dbca96a50d82ec20d9db7d0dfa2979fdd4b512200e",
	            "SandboxKey": "/var/run/docker/netns/f0d3f0a525a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-468539": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8043b90263214f4b2e6a8501c7af598190f163277d9c059bfe96da303e39ab18",
	                    "EndpointID": "fc344c8650655042b3d7e750cbcc5df348163d3827e0b8209b0648fe1eb442e6",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "a2:d7:3a:34:ef:77",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-468539",
	                        "6169612bc56b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
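One detail worth pulling out of the inspect output above while debugging: the node publishes its ports only on loopback, and the API server port 8443/tcp is mapped to 127.0.0.1:33101 for this run (host ports are allocated per run, so treat 33101 as an example). The mapping can be confirmed without re-reading the full JSON:

	docker port no-preload-468539 8443
	# -> 127.0.0.1:33101 in this run; with the docker driver this is the host-side
	#    endpoint minikube's kubeconfig entry for the profile points at.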
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-468539 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-468539 logs -n 25: (1.183932631s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-094798 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo docker system info                                                                                                                                 │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cri-dockerd --version                                                                                                                              │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo containerd config dump                                                                                                                             │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo crio config                                                                                                                                        │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p bridge-094798                                                                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                          │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:14:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:14:11.651260  377144 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:11.651548  377144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:11.651557  377144 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:11.651565  377144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:11.651834  377144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:14:11.652351  377144 out.go:368] Setting JSON to false
	I1210 06:14:11.653721  377144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3396,"bootTime":1765343856,"procs":450,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:14:11.653790  377144 start.go:143] virtualization: kvm guest
	I1210 06:14:11.655560  377144 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:14:11.657730  377144 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:14:11.657730  377144 notify.go:221] Checking for updates...
	I1210 06:14:11.659887  377144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:14:11.660913  377144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:11.661987  377144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:14:11.662969  377144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:14:11.663981  377144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:14:11.665492  377144 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:11.665579  377144 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:11.665647  377144 config.go:182] Loaded profile config "old-k8s-version-725426": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:14:11.665729  377144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:14:11.688895  377144 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:14:11.688997  377144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:11.747939  377144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:14:11.738154514 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:11.748074  377144 docker.go:319] overlay module found
	I1210 06:14:11.749759  377144 out.go:179] * Using the docker driver based on user configuration
	I1210 06:14:11.750795  377144 start.go:309] selected driver: docker
	I1210 06:14:11.750813  377144 start.go:927] validating driver "docker" against <nil>
	I1210 06:14:11.750828  377144 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:14:11.751584  377144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:11.809247  377144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:14:11.799219659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:11.809429  377144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:14:11.809688  377144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:14:11.811338  377144 out.go:179] * Using Docker driver with root privileges
	I1210 06:14:11.812431  377144 cni.go:84] Creating CNI manager for ""
	I1210 06:14:11.812489  377144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:11.812499  377144 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:14:11.812545  377144 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:11.813569  377144 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:14:11.814639  377144 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:14:11.815633  377144 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:14:11.816575  377144 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:14:11.816671  377144 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:14:11.836470  377144 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:14:11.836486  377144 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:14:11.846429  377144 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 06:14:11.928669  377144 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
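Both preload URLs above answered 404, so no preloaded-images tarball exists for Kubernetes v1.34.3 with the cri-o runtime; the cache.go lines further down in this same run show the fallback path, with each image saved to the host cache individually ("Successfully saved all images to host disk."). A quick manual reproduction of the 404, assuming curl is available on the agent:

	curl -sI "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4" | head -n1
	# first line should read HTTP/2 404, matching the warning above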
	I1210 06:14:11.928793  377144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:14:11.928821  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json: {Name:mkf8b351fe32c3f192619433d4ef62158eb42523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:11.928972  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:11.928991  377144 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:14:11.929016  377144 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:11.929072  377144 start.go:364] duration metric: took 41.309µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:14:11.929113  377144 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:11.929179  377144 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:14:10.842152  358054 pod_ready.go:94] pod "kube-controller-manager-no-preload-468539" is "Ready"
	I1210 06:14:10.842177  358054 pod_ready.go:86] duration metric: took 311.35596ms for pod "kube-controller-manager-no-preload-468539" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:11.043049  358054 pod_ready.go:83] waiting for pod "kube-proxy-ngf5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:11.442593  358054 pod_ready.go:94] pod "kube-proxy-ngf5r" is "Ready"
	I1210 06:14:11.442623  358054 pod_ready.go:86] duration metric: took 399.547178ms for pod "kube-proxy-ngf5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:11.643098  358054 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-468539" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:12.041931  358054 pod_ready.go:94] pod "kube-scheduler-no-preload-468539" is "Ready"
	I1210 06:14:12.041965  358054 pod_ready.go:86] duration metric: took 398.845942ms for pod "kube-scheduler-no-preload-468539" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:12.041980  358054 pod_ready.go:40] duration metric: took 1.604187472s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:14:12.094375  358054 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:14:12.095957  358054 out.go:179] * Done! kubectl is now configured to use "no-preload-468539" cluster and "default" namespace by default
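The "Done!" line above means kubectl is now pointed at the no-preload-468539 cluster. minikube normally names the kubeconfig context after the profile, so a sanity check from the host (assuming kubectl is on PATH and that naming holds) would be:

	kubectl --context no-preload-468539 get nodes
	kubectl --context no-preload-468539 -n kube-system get pods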
	W1210 06:14:11.061582  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:13.561647  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	I1210 06:14:11.710186  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:12.210063  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:12.710331  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:13.209987  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:13.710522  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:14.209629  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:14.710541  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:15.210339  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:15.297604  366268 kubeadm.go:1114] duration metric: took 4.176354002s to wait for elevateKubeSystemPrivileges
	I1210 06:14:15.297647  366268 kubeadm.go:403] duration metric: took 14.9119621s to StartCluster
	I1210 06:14:15.297670  366268 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:15.297739  366268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:15.299910  366268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:15.300188  366268 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:15.300309  366268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:14:15.300344  366268 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:14:15.300459  366268 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-028500"
	I1210 06:14:15.300478  366268 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-028500"
	I1210 06:14:15.300484  366268 addons.go:70] Setting default-storageclass=true in profile "embed-certs-028500"
	I1210 06:14:15.300505  366268 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:15.300510  366268 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:14:15.300510  366268 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-028500"
	I1210 06:14:15.300952  366268 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:15.301167  366268 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:15.303727  366268 out.go:179] * Verifying Kubernetes components...
	I1210 06:14:15.305179  366268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:15.330239  366268 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:15.331558  366268 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:15.331577  366268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:14:15.331647  366268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:14:15.331991  366268 addons.go:239] Setting addon default-storageclass=true in "embed-certs-028500"
	I1210 06:14:15.332037  366268 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:14:15.332509  366268 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:15.362720  366268 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:15.362744  366268 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:14:15.362788  366268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:14:15.362810  366268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:14:15.388163  366268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:14:15.413733  366268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:14:15.476641  366268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:15.485945  366268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:15.511014  366268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:15.624137  366268 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
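The replace pipeline at 06:14:15.413 rewrites the Corefile stored in the coredns ConfigMap: a hosts block is inserted ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of errors. Assuming the stock Corefile layout, the patched fragment looks roughly like this:

	        log
	        errors
	        [ ...other plugins unchanged... ]
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf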
	I1210 06:14:15.626936  366268 node_ready.go:35] waiting up to 6m0s for node "embed-certs-028500" to be "Ready" ...
	I1210 06:14:15.878101  366268 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 06:14:15.879210  366268 addons.go:530] duration metric: took 578.878043ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 06:14:16.130955  366268 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-028500" context rescaled to 1 replicas
	I1210 06:14:11.930913  377144 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:14:11.931120  377144 start.go:159] libmachine.API.Create for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:14:11.931144  377144 client.go:173] LocalClient.Create starting
	I1210 06:14:11.931187  377144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:14:11.931212  377144 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:11.931227  377144 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:11.931283  377144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:14:11.931301  377144 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:11.931310  377144 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:11.931623  377144 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:14:11.949312  377144 cli_runner.go:211] docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:14:11.949370  377144 network_create.go:284] running [docker network inspect default-k8s-diff-port-125336] to gather additional debugging logs...
	I1210 06:14:11.949385  377144 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336
	W1210 06:14:11.967106  377144 cli_runner.go:211] docker network inspect default-k8s-diff-port-125336 returned with exit code 1
	I1210 06:14:11.967139  377144 network_create.go:287] error running [docker network inspect default-k8s-diff-port-125336]: docker network inspect default-k8s-diff-port-125336: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-125336 not found
	I1210 06:14:11.967150  377144 network_create.go:289] output of [docker network inspect default-k8s-diff-port-125336]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-125336 not found
	
	** /stderr **
	I1210 06:14:11.967288  377144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:14:11.985789  377144 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:14:11.986604  377144 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:14:11.987371  377144 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:14:11.987894  377144 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b1ead66c643d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:c5:90:28:3d:ff} reservation:<nil>}
	I1210 06:14:11.988552  377144 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b8125d4cfb05 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:71:f2:00:8c:13} reservation:<nil>}
	I1210 06:14:11.989161  377144 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-8043b9026321 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:82:a4:a7:52:6e:bc} reservation:<nil>}
	I1210 06:14:11.990015  377144 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001feb7f0}
	I1210 06:14:11.990041  377144 network_create.go:124] attempt to create docker network default-k8s-diff-port-125336 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 06:14:11.990122  377144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 default-k8s-diff-port-125336
	I1210 06:14:12.040138  377144 network_create.go:108] docker network default-k8s-diff-port-125336 192.168.103.0/24 created
	I1210 06:14:12.040166  377144 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-125336" container
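The network.go lines above walk the existing minikube bridge networks (192.168.49/58/67/76/85/94.0/24 are all taken) and settle on the free 192.168.103.0/24, after which the node container is assigned the first client address in that range. Assuming the network exists as logged, the choice can be read back with a standard inspect:

	docker network inspect default-k8s-diff-port-125336 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.103.0/24 192.168.103.1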
	I1210 06:14:12.040232  377144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:14:12.061114  377144 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-125336 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:14:12.066471  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:12.082233  377144 oci.go:103] Successfully created a docker volume default-k8s-diff-port-125336
	I1210 06:14:12.082309  377144 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-125336-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --entrypoint /usr/bin/test -v default-k8s-diff-port-125336:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:14:12.221712  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:12.386524  377144 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386562  377144 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386572  377144 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386568  377144 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386531  377144 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386525  377144 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386692  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:14:12.386704  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:14:12.386664  377144 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386723  377144 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 160.066µs
	I1210 06:14:12.386735  377144 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:14:12.386744  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:14:12.386763  377144 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 260.42µs
	I1210 06:14:12.386773  377144 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:14:12.386705  377144 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 184.524µs
	I1210 06:14:12.386790  377144 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:14:12.386788  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:14:12.386675  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:14:12.386795  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:14:12.386804  377144 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 246.574µs
	I1210 06:14:12.386808  377144 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 231.72µs
	I1210 06:14:12.386812  377144 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 306.425µs
	I1210 06:14:12.386818  377144 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:14:12.386778  377144 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386823  377144 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:14:12.386827  377144 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:14:12.386810  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:14:12.386876  377144 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 246.731µs
	I1210 06:14:12.386894  377144 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:14:12.386858  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:14:12.386905  377144 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 153.287µs
	I1210 06:14:12.386917  377144 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:14:12.386924  377144 cache.go:87] Successfully saved all images to host disk.
	I1210 06:14:12.513833  377144 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-125336
	I1210 06:14:12.513914  377144 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:14:12.514094  377144 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:14:12.514139  377144 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:14:12.514194  377144 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:14:12.585914  377144 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-125336 --name default-k8s-diff-port-125336 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --network default-k8s-diff-port-125336 --ip 192.168.103.2 --volume default-k8s-diff-port-125336:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:14:12.888807  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Running}}
	I1210 06:14:12.910122  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:12.929981  377144 cli_runner.go:164] Run: docker exec default-k8s-diff-port-125336 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:14:12.982113  377144 oci.go:144] the created container "default-k8s-diff-port-125336" has a running status.
	I1210 06:14:12.982150  377144 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa...
	I1210 06:14:13.034115  377144 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:14:13.062572  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:13.084243  377144 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:14:13.084271  377144 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-125336 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:14:13.136379  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:13.162867  377144 machine.go:94] provisionDockerMachine start ...
	I1210 06:14:13.162975  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:13.187567  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:13.187909  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:13.187937  377144 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:14:13.188677  377144 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53622->127.0.0.1:33113: read: connection reset by peer
	I1210 06:14:16.337950  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:14:16.337978  377144 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:14:16.338040  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:16.359688  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:16.359996  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:16.360021  377144 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:14:16.512022  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:14:16.512118  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:16.537348  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:16.537653  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:16.537683  377144 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1210 06:14:15.566505  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:18.062518  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	I1210 06:14:16.696513  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:14:16.696542  377144 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:14:16.698419  377144 ubuntu.go:190] setting up certificates
	I1210 06:14:16.698440  377144 provision.go:84] configureAuth start
	I1210 06:14:16.698509  377144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:14:16.724121  377144 provision.go:143] copyHostCerts
	I1210 06:14:16.724189  377144 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:14:16.724199  377144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:14:16.724270  377144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:14:16.724396  377144 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:14:16.724406  377144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:14:16.724449  377144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:14:16.724540  377144 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:14:16.724546  377144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:14:16.724584  377144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:14:16.724664  377144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:14:16.768859  377144 provision.go:177] copyRemoteCerts
	I1210 06:14:16.768933  377144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:14:16.768989  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:16.794694  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:16.909826  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:14:16.937019  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:14:16.963443  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:14:16.988991  377144 provision.go:87] duration metric: took 290.528609ms to configureAuth
	I1210 06:14:16.989022  377144 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:14:16.989233  377144 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:16.989371  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.012694  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:17.013181  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:17.013233  377144 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:14:17.375999  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:14:17.376169  377144 machine.go:97] duration metric: took 4.213275938s to provisionDockerMachine
	I1210 06:14:17.376199  377144 client.go:176] duration metric: took 5.445047641s to LocalClient.Create
	I1210 06:14:17.376245  377144 start.go:167] duration metric: took 5.4451166s to libmachine.API.Create "default-k8s-diff-port-125336"
	I1210 06:14:17.376259  377144 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:14:17.376271  377144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:14:17.376335  377144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:14:17.376397  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.403347  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.516489  377144 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:14:17.521458  377144 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:14:17.521495  377144 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:14:17.521507  377144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:14:17.521564  377144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:14:17.521689  377144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:14:17.521934  377144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:14:17.533674  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:14:17.559335  377144 start.go:296] duration metric: took 183.063493ms for postStartSetup
	I1210 06:14:17.559741  377144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:14:17.583240  377144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:14:17.583500  377144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:14:17.583555  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.605976  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.708723  377144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:14:17.715304  377144 start.go:128] duration metric: took 5.786110546s to createHost
	I1210 06:14:17.715331  377144 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 5.786226817s
	I1210 06:14:17.715406  377144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:14:17.738408  377144 ssh_runner.go:195] Run: cat /version.json
	I1210 06:14:17.738471  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.738725  377144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:14:17.738840  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.762731  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.764610  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.866930  377144 ssh_runner.go:195] Run: systemctl --version
	I1210 06:14:17.951907  377144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:14:18.000720  377144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:14:18.008012  377144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:14:18.008101  377144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:14:18.040030  377144 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:14:18.040059  377144 start.go:496] detecting cgroup driver to use...
	I1210 06:14:18.040109  377144 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:14:18.040160  377144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:14:18.063732  377144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:14:18.079291  377144 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:14:18.079349  377144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:14:18.102616  377144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:14:18.125181  377144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:14:18.239393  377144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:14:18.371586  377144 docker.go:234] disabling docker service ...
	I1210 06:14:18.371649  377144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:14:18.397030  377144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:14:18.412819  377144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:14:18.537461  377144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:14:18.637223  377144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:14:18.649745  377144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:14:18.664652  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:18.795096  377144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:14:18.795149  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.806688  377144 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:14:18.806738  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.815694  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.825095  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.835440  377144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:14:18.844541  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.852993  377144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.866450  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.875360  377144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:14:18.882773  377144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:14:18.890978  377144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:18.976579  377144 ssh_runner.go:195] Run: sudo systemctl restart crio
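The sed edits between 06:14:18.795 and 06:14:18.866 all touch /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, force the systemd cgroup manager, set conmon_cgroup back to "pod", and add the unprivileged-port sysctl. A sketch of the affected keys in that drop-in after the restart, assuming nothing else rewrites the file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]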
	I1210 06:14:19.886556  377144 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:14:19.886633  377144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:14:19.890738  377144 start.go:564] Will wait 60s for crictl version
	I1210 06:14:19.890803  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:19.894176  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:14:19.918653  377144 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:14:19.918732  377144 ssh_runner.go:195] Run: crio --version
	I1210 06:14:19.945335  377144 ssh_runner.go:195] Run: crio --version
	I1210 06:14:19.974121  377144 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	
	
	==> CRI-O <==
	Dec 10 06:14:09 no-preload-468539 crio[768]: time="2025-12-10T06:14:09.5878908Z" level=info msg="Starting container: 4327a126366fde241ede9b8e3edf202da507e51ad80d3a0ef32b901bc284a15f" id=3315f646-3ce0-4ced-92fa-75ec690ae807 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:09 no-preload-468539 crio[768]: time="2025-12-10T06:14:09.589774448Z" level=info msg="Started container" PID=2787 containerID=4327a126366fde241ede9b8e3edf202da507e51ad80d3a0ef32b901bc284a15f description=kube-system/coredns-7d764666f9-tnm7t/coredns id=3315f646-3ce0-4ced-92fa-75ec690ae807 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c61903d6c2b8230ff1fbc6ac2d50f67cf1da855e8df0e5b9f5b79ac12cd39a3
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.580463175Z" level=info msg="Running pod sandbox: default/busybox/POD" id=17b788ef-4ab8-4213-bc6f-058f9e543aa7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.580566803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.586338402Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22d2cb0d9974984c8a1c05e22a9b9052efd15c08685d3805267ef5c1f9a5adf2 UID:bae410c4-fff9-404e-a09d-794d0f6bd59d NetNS:/var/run/netns/9974f78b-ff37-4c8b-ae98-d2d9e5696682 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ca0c68}] Aliases:map[]}"
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.586381974Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.597452623Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22d2cb0d9974984c8a1c05e22a9b9052efd15c08685d3805267ef5c1f9a5adf2 UID:bae410c4-fff9-404e-a09d-794d0f6bd59d NetNS:/var/run/netns/9974f78b-ff37-4c8b-ae98-d2d9e5696682 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ca0c68}] Aliases:map[]}"
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.597787786Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.598673643Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.600004461Z" level=info msg="Ran pod sandbox 22d2cb0d9974984c8a1c05e22a9b9052efd15c08685d3805267ef5c1f9a5adf2 with infra container: default/busybox/POD" id=17b788ef-4ab8-4213-bc6f-058f9e543aa7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.601646057Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=117348e3-145e-4679-81d4-b53874792f63 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.601782824Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=117348e3-145e-4679-81d4-b53874792f63 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.601830518Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=117348e3-145e-4679-81d4-b53874792f63 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.602927218Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f3e06f21-38f6-46fa-8020-287c856830c3 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:14:12 no-preload-468539 crio[768]: time="2025-12-10T06:14:12.60483784Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.268620184Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f3e06f21-38f6-46fa-8020-287c856830c3 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.269492055Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e50a2c50-08d1-43f8-ae6c-1633eb656b85 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.271485627Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06548f35-6f8f-48dd-932d-2090cf448f7e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.278432312Z" level=info msg="Creating container: default/busybox/busybox" id=47f0a42e-df07-48a2-96ce-34ab9baa616e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.278571195Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.283613963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.284241725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.303539409Z" level=info msg="Created container 39723bb0af01fc97da0679b0234648dd8898bf5c631aadbaac9e81b4c1ac6b5e: default/busybox/busybox" id=47f0a42e-df07-48a2-96ce-34ab9baa616e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.304592341Z" level=info msg="Starting container: 39723bb0af01fc97da0679b0234648dd8898bf5c631aadbaac9e81b4c1ac6b5e" id=88360239-82ff-427b-9521-aeaffcbded76 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:13 no-preload-468539 crio[768]: time="2025-12-10T06:14:13.306249Z" level=info msg="Started container" PID=2857 containerID=39723bb0af01fc97da0679b0234648dd8898bf5c631aadbaac9e81b4c1ac6b5e description=default/busybox/busybox id=88360239-82ff-427b-9521-aeaffcbded76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22d2cb0d9974984c8a1c05e22a9b9052efd15c08685d3805267ef5c1f9a5adf2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	39723bb0af01f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   22d2cb0d99749       busybox                                     default
	4327a126366fd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   7c61903d6c2b8       coredns-7d764666f9-tnm7t                    kube-system
	039396cb91d81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   39d809be5966e       storage-provisioner                         kube-system
	900dadb5f375a       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   d9fa4e8780efd       kindnet-wqxf2                               kube-system
	afb362634bb5c       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      25 seconds ago      Running             kube-proxy                0                   e934ab29c9e57       kube-proxy-ngf5r                            kube-system
	f48f014a2d45b       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      36 seconds ago      Running             kube-controller-manager   0                   47be156068e11       kube-controller-manager-no-preload-468539   kube-system
	acd04628f38df       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      36 seconds ago      Running             etcd                      0                   e5e08c7225a28       etcd-no-preload-468539                      kube-system
	19181627a59d1       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      36 seconds ago      Running             kube-scheduler            0                   c911dfc746034       kube-scheduler-no-preload-468539            kube-system
	7a82e60d480b5       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      36 seconds ago      Running             kube-apiserver            0                   63f5c20088bf3       kube-apiserver-no-preload-468539            kube-system
	
	
	==> coredns [4327a126366fde241ede9b8e3edf202da507e51ad80d3a0ef32b901bc284a15f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48419 - 22786 "HINFO IN 6078392669683591198.7404992045260230156. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.937744467s
	
	
	==> describe nodes <==
	Name:               no-preload-468539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-468539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=no-preload-468539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_13_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-468539
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:14:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:14:21 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:14:21 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:14:21 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:14:21 +0000   Wed, 10 Dec 2025 06:14:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-468539
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bc6f673e-f944-4d8e-86ab-fb27468ab4df
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-tnm7t                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-468539                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-wqxf2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-468539             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-468539    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-ngf5r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-468539             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-468539 event: Registered Node no-preload-468539 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [acd04628f38dfe9edbc2b2bd23e3b6d520f2a9ba8c751bf4f9b94009aca91b1b] <==
	{"level":"info","ts":"2025-12-10T06:13:46.622828Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-10T06:13:46.622842Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:13:46.622856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:46.623367Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:46.623415Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:13:46.623443Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:46.623456Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:46.624288Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-468539 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:13:46.624327Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:13:46.624351Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:13:46.624479Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:13:46.624601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:13:46.624670Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:13:46.625153Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:13:46.625260Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:13:46.625303Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:13:46.625334Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-10T06:13:46.625462Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-10T06:13:46.625711Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:13:46.625836Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:13:46.628014Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:13:46.628022Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-12-10T06:13:56.469417Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.03755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4429"}
	{"level":"info","ts":"2025-12-10T06:13:56.469515Z","caller":"traceutil/trace.go:172","msg":"trace[1097621114] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:380; }","duration":"109.164643ms","start":"2025-12-10T06:13:56.360329Z","end":"2025-12-10T06:13:56.469494Z","steps":["trace[1097621114] 'agreement among raft nodes before linearized reading'  (duration: 49.611471ms)","trace[1097621114] 'range keys from in-memory index tree'  (duration: 59.300488ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:13:56.469660Z","caller":"traceutil/trace.go:172","msg":"trace[1074945869] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"129.832836ms","start":"2025-12-10T06:13:56.339800Z","end":"2025-12-10T06:13:56.469633Z","steps":["trace[1074945869] 'process raft request'  (duration: 70.106477ms)","trace[1074945869] 'compare'  (duration: 59.492619ms)"],"step_count":2}
	
	
	==> kernel <==
	 06:14:22 up 56 min,  0 user,  load average: 6.97, 4.69, 2.86
	Linux no-preload-468539 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [900dadb5f375ad651763cf6e3ae1af369e23e31fe7b7b203a5f3f21de657f5e3] <==
	I1210 06:13:58.751106       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:13:58.751604       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 06:13:58.751901       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:13:58.751986       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:13:58.752005       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:13:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:13:59.046971       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:13:59.046999       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:13:59.047010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:13:59.047214       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:13:59.447703       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:13:59.447732       1 metrics.go:72] Registering metrics
	I1210 06:13:59.447801       1 controller.go:711] "Syncing nftables rules"
	I1210 06:14:09.047515       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:14:09.047573       1 main.go:301] handling current node
	I1210 06:14:19.051253       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:14:19.051289       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a82e60d480b5c9b3d9fe616801a7dc9ee8cf5bf0b67a14495837345780bac43] <==
	I1210 06:13:47.632690       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:13:47.652648       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:13:47.652710       1 aggregator.go:187] initial CRD sync complete...
	I1210 06:13:47.652723       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:13:47.652731       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:13:47.652738       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:13:47.661071       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:13:48.532563       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1210 06:13:48.538638       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:13:48.538656       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:13:49.250767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:13:49.291075       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:13:49.433513       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:13:49.446421       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1210 06:13:49.447961       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:13:49.455122       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:13:49.581307       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:13:50.454845       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:13:50.470569       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:13:50.479838       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:13:55.033495       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:13:55.038234       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:13:55.231528       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:13:55.588783       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1210 06:14:20.360682       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:46034: use of closed network connection
	
	
	==> kube-controller-manager [f48f014a2d45b9c5ae328d8fd4cd5874fa0067f75312dfdb97b25bf3e0b08ade] <==
	I1210 06:13:54.384675       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.385894       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.385943       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.385976       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.385978       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386071       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386126       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386198       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386678       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386694       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386758       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386779       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386787       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.386815       1 range_allocator.go:177] "Sending events to api server"
	I1210 06:13:54.386878       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 06:13:54.386885       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:13:54.386910       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.391033       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.437857       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:13:54.468304       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-468539" podCIDRs=["10.244.0.0/24"]
	I1210 06:13:54.484237       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:54.484253       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:13:54.484258       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:13:54.538290       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:09.386718       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [afb362634bb5cb3a8920da9b12f4a3c5af746ca30b31ed0dcabfc6893545a2b0] <==
	I1210 06:13:56.037904       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:13:56.099217       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:13:56.200142       1 shared_informer.go:377] "Caches are synced"
	I1210 06:13:56.200178       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 06:13:56.200269       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:13:56.235976       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:13:56.236117       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:13:56.242047       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:13:56.242447       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:13:56.242490       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:13:56.244582       1 config.go:200] "Starting service config controller"
	I1210 06:13:56.244645       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:13:56.244949       1 config.go:309] "Starting node config controller"
	I1210 06:13:56.247620       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:13:56.247638       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:13:56.245018       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:13:56.247650       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:13:56.245017       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:13:56.247665       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:13:56.345811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:13:56.347965       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:13:56.348007       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [19181627a59d15038124fa4163e11d037cd92c6d3e1a37281f0e66f1f119efd6] <==
	E1210 06:13:47.591881       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1210 06:13:47.592652       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1210 06:13:47.592758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 06:13:47.592755       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1210 06:13:47.593119       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1210 06:13:47.593258       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1210 06:13:47.593312       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1210 06:13:47.593388       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1210 06:13:48.498399       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1210 06:13:48.499234       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1210 06:13:48.525617       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1210 06:13:48.594738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1210 06:13:48.595299       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1210 06:13:48.596705       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1210 06:13:48.734573       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1210 06:13:48.737156       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1210 06:13:48.860212       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1210 06:13:48.964459       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 06:13:49.000098       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1210 06:13:49.049458       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1210 06:13:49.053341       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1210 06:13:49.057303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1210 06:13:49.086521       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 06:13:49.174540       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1210 06:13:52.286129       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:13:55 no-preload-468539 kubelet[2200]: I1210 06:13:55.706347    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfz94\" (UniqueName: \"kubernetes.io/projected/4c145b17-54cd-42f5-9af6-414709abcb9e-kube-api-access-vfz94\") pod \"kube-proxy-ngf5r\" (UID: \"4c145b17-54cd-42f5-9af6-414709abcb9e\") " pod="kube-system/kube-proxy-ngf5r"
	Dec 10 06:13:55 no-preload-468539 kubelet[2200]: I1210 06:13:55.706370    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a29f9d6a-2e74-499d-bf5d-5931eba307bd-xtables-lock\") pod \"kindnet-wqxf2\" (UID: \"a29f9d6a-2e74-499d-bf5d-5931eba307bd\") " pod="kube-system/kindnet-wqxf2"
	Dec 10 06:13:55 no-preload-468539 kubelet[2200]: I1210 06:13:55.706392    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a29f9d6a-2e74-499d-bf5d-5931eba307bd-lib-modules\") pod \"kindnet-wqxf2\" (UID: \"a29f9d6a-2e74-499d-bf5d-5931eba307bd\") " pod="kube-system/kindnet-wqxf2"
	Dec 10 06:13:55 no-preload-468539 kubelet[2200]: I1210 06:13:55.706413    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crcgr\" (UniqueName: \"kubernetes.io/projected/a29f9d6a-2e74-499d-bf5d-5931eba307bd-kube-api-access-crcgr\") pod \"kindnet-wqxf2\" (UID: \"a29f9d6a-2e74-499d-bf5d-5931eba307bd\") " pod="kube-system/kindnet-wqxf2"
	Dec 10 06:13:55 no-preload-468539 kubelet[2200]: I1210 06:13:55.706435    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c145b17-54cd-42f5-9af6-414709abcb9e-kube-proxy\") pod \"kube-proxy-ngf5r\" (UID: \"4c145b17-54cd-42f5-9af6-414709abcb9e\") " pod="kube-system/kube-proxy-ngf5r"
	Dec 10 06:13:55 no-preload-468539 kubelet[2200]: I1210 06:13:55.706459    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c145b17-54cd-42f5-9af6-414709abcb9e-xtables-lock\") pod \"kube-proxy-ngf5r\" (UID: \"4c145b17-54cd-42f5-9af6-414709abcb9e\") " pod="kube-system/kube-proxy-ngf5r"
	Dec 10 06:13:56 no-preload-468539 kubelet[2200]: I1210 06:13:56.471938    2200 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ngf5r" podStartSLOduration=1.4719149009999999 podStartE2EDuration="1.471914901s" podCreationTimestamp="2025-12-10 06:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:13:56.471729966 +0000 UTC m=+6.290753424" watchObservedRunningTime="2025-12-10 06:13:56.471914901 +0000 UTC m=+6.290938342"
	Dec 10 06:13:58 no-preload-468539 kubelet[2200]: E1210 06:13:58.607974    2200 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-468539" containerName="etcd"
	Dec 10 06:13:59 no-preload-468539 kubelet[2200]: E1210 06:13:59.271492    2200 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-468539" containerName="kube-controller-manager"
	Dec 10 06:13:59 no-preload-468539 kubelet[2200]: I1210 06:13:59.356993    2200 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-wqxf2" podStartSLOduration=1.84933843 podStartE2EDuration="4.356976177s" podCreationTimestamp="2025-12-10 06:13:55 +0000 UTC" firstStartedPulling="2025-12-10 06:13:55.9459747 +0000 UTC m=+5.764998133" lastFinishedPulling="2025-12-10 06:13:58.453612459 +0000 UTC m=+8.272635880" observedRunningTime="2025-12-10 06:13:59.356896379 +0000 UTC m=+9.175919819" watchObservedRunningTime="2025-12-10 06:13:59.356976177 +0000 UTC m=+9.175999616"
	Dec 10 06:14:01 no-preload-468539 kubelet[2200]: E1210 06:14:01.695169    2200 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-468539" containerName="kube-apiserver"
	Dec 10 06:14:03 no-preload-468539 kubelet[2200]: E1210 06:14:03.843927    2200 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-468539" containerName="kube-scheduler"
	Dec 10 06:14:08 no-preload-468539 kubelet[2200]: E1210 06:14:08.610009    2200 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-468539" containerName="etcd"
	Dec 10 06:14:09 no-preload-468539 kubelet[2200]: I1210 06:14:09.212385    2200 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 10 06:14:09 no-preload-468539 kubelet[2200]: E1210 06:14:09.276048    2200 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-468539" containerName="kube-controller-manager"
	Dec 10 06:14:09 no-preload-468539 kubelet[2200]: I1210 06:14:09.313191    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab2d98ba-d61e-424b-865b-bef2fcfc0c41-config-volume\") pod \"coredns-7d764666f9-tnm7t\" (UID: \"ab2d98ba-d61e-424b-865b-bef2fcfc0c41\") " pod="kube-system/coredns-7d764666f9-tnm7t"
	Dec 10 06:14:09 no-preload-468539 kubelet[2200]: I1210 06:14:09.313230    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wskx\" (UniqueName: \"kubernetes.io/projected/2fe266e2-db83-4749-8cbf-604a7be68986-kube-api-access-4wskx\") pod \"storage-provisioner\" (UID: \"2fe266e2-db83-4749-8cbf-604a7be68986\") " pod="kube-system/storage-provisioner"
	Dec 10 06:14:09 no-preload-468539 kubelet[2200]: I1210 06:14:09.313248    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg94w\" (UniqueName: \"kubernetes.io/projected/ab2d98ba-d61e-424b-865b-bef2fcfc0c41-kube-api-access-zg94w\") pod \"coredns-7d764666f9-tnm7t\" (UID: \"ab2d98ba-d61e-424b-865b-bef2fcfc0c41\") " pod="kube-system/coredns-7d764666f9-tnm7t"
	Dec 10 06:14:09 no-preload-468539 kubelet[2200]: I1210 06:14:09.313266    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2fe266e2-db83-4749-8cbf-604a7be68986-tmp\") pod \"storage-provisioner\" (UID: \"2fe266e2-db83-4749-8cbf-604a7be68986\") " pod="kube-system/storage-provisioner"
	Dec 10 06:14:10 no-preload-468539 kubelet[2200]: E1210 06:14:10.371533    2200 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-tnm7t" containerName="coredns"
	Dec 10 06:14:10 no-preload-468539 kubelet[2200]: I1210 06:14:10.384191    2200 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-tnm7t" podStartSLOduration=15.384172282 podStartE2EDuration="15.384172282s" podCreationTimestamp="2025-12-10 06:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:10.38403808 +0000 UTC m=+20.203061541" watchObservedRunningTime="2025-12-10 06:14:10.384172282 +0000 UTC m=+20.203195723"
	Dec 10 06:14:10 no-preload-468539 kubelet[2200]: I1210 06:14:10.404661    2200 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.404641972 podStartE2EDuration="14.404641972s" podCreationTimestamp="2025-12-10 06:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:10.394730061 +0000 UTC m=+20.213753502" watchObservedRunningTime="2025-12-10 06:14:10.404641972 +0000 UTC m=+20.223665411"
	Dec 10 06:14:11 no-preload-468539 kubelet[2200]: E1210 06:14:11.375785    2200 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-tnm7t" containerName="coredns"
	Dec 10 06:14:12 no-preload-468539 kubelet[2200]: I1210 06:14:12.333564    2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb79n\" (UniqueName: \"kubernetes.io/projected/bae410c4-fff9-404e-a09d-794d0f6bd59d-kube-api-access-lb79n\") pod \"busybox\" (UID: \"bae410c4-fff9-404e-a09d-794d0f6bd59d\") " pod="default/busybox"
	Dec 10 06:14:12 no-preload-468539 kubelet[2200]: E1210 06:14:12.378572    2200 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-tnm7t" containerName="coredns"
	
	
	==> storage-provisioner [039396cb91d8196711e48f787f2d7d2f8cb5af863b63b72e2ad4d723018e2b6c] <==
	I1210 06:14:09.595161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:14:09.605123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:14:09.605237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:14:09.607663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:09.614006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:14:09.614158       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:14:09.614220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bd6fb81-e34e-4509-b3cd-dcebd24f16e8", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-468539_e1aecaaf-c0de-476a-aa40-e185c1611fef became leader
	I1210 06:14:09.614296       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-468539_e1aecaaf-c0de-476a-aa40-e185c1611fef!
	W1210 06:14:09.616918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:09.622544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:14:09.714594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-468539_e1aecaaf-c0de-476a-aa40-e185c1611fef!
	W1210 06:14:11.626166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:11.630231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:13.633518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:13.638126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:15.642248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:15.647307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:17.650778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:17.655568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:19.659012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:19.704400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:21.708226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:21.712899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-468539 -n no-preload-468539
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-468539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.868438ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-028500 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-028500 describe deploy/metrics-server -n kube-system: exit status 1 (63.974223ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-028500 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
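The exit status 11 above traces back to minikube's "check paused" pre-flight: before enabling an addon it lists runc containers inside the node (`sudo runc list -f json`, per the stderr earlier in this section), and the command exits non-zero because `/run/runc` does not exist yet. The following is a minimal Go sketch of that check under those assumptions; `runcList` is an illustrative name, not minikube's actual API.

    // Sketch of the failing "check paused" step: run `sudo runc list -f json`
    // and treat a non-zero exit (e.g. /run/runc missing) as an error, mirroring
    // the MK_ADDON_ENABLE_PAUSED stderr captured above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runcList is a hypothetical helper for this sketch.
    func runcList() ([]byte, error) {
    	// CRI-O delegates to runc; until runc has created state under /run/runc,
    	// "runc list" can fail with "open /run/runc: no such file or directory".
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil {
    		return nil, fmt.Errorf("runc: sudo runc list -f json: %w\noutput:\n%s", err, out)
    	}
    	return out, nil
    }

    func main() {
    	if _, err := runcList(); err != nil {
    		// In minikube this propagates up as the addon-enable failure (exit status 11).
    		fmt.Println("check paused failed:", err)
    	}
    }

Because the check fails before the addon manifests are applied, the later `kubectl describe deploy/metrics-server` NotFound error is a consequence of the same failure rather than a separate problem.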
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-028500
helpers_test.go:244: (dbg) docker inspect embed-certs-028500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef",
	        "Created": "2025-12-10T06:13:43.905625825Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:13:43.933317463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/hosts",
	        "LogPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef-json.log",
	        "Name": "/embed-certs-028500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-028500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-028500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef",
	                "LowerDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-028500",
	                "Source": "/var/lib/docker/volumes/embed-certs-028500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-028500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-028500",
	                "name.minikube.sigs.k8s.io": "embed-certs-028500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3df830ed173aea56b1cb8b072ce00d3c60639f69554c4d62d4434be8e1e00e7f",
	            "SandboxKey": "/var/run/docker/netns/3df830ed173a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-028500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8125d4cfb05aa73cd2f2d202e5458638ebd5752e96171ba51a763c87ba4071f",
	                    "EndpointID": "d06ca7b09641f2d02dd457de505f04b4c7a64a21c4ac5f1b3271187492ca590e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "46:f7:c8:51:92:20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-028500",
	                        "07156149803f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
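Note: the ephemeral host ports recorded under NetworkSettings.Ports above (e.g. 33103 for 22/tcp) are what the tooling dials for SSH. As a small sketch, assuming the embed-certs-028500 container is still running, the same value can be read back with the inspect template the harness itself uses further down in these logs:

    # prints the host port published for the container's SSH port (22/tcp); 33103 in the output above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-028500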
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-028500 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-094798 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo docker system info                                                                                                                                 │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cri-dockerd --version                                                                                                                              │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo containerd config dump                                                                                                                             │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo crio config                                                                                                                                        │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p bridge-094798                                                                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                          │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                              │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:14:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:14:11.651260  377144 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:11.651548  377144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:11.651557  377144 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:11.651565  377144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:11.651834  377144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:14:11.652351  377144 out.go:368] Setting JSON to false
	I1210 06:14:11.653721  377144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3396,"bootTime":1765343856,"procs":450,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:14:11.653790  377144 start.go:143] virtualization: kvm guest
	I1210 06:14:11.655560  377144 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:14:11.657730  377144 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:14:11.657730  377144 notify.go:221] Checking for updates...
	I1210 06:14:11.659887  377144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:14:11.660913  377144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:11.661987  377144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:14:11.662969  377144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:14:11.663981  377144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:14:11.665492  377144 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:11.665579  377144 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:11.665647  377144 config.go:182] Loaded profile config "old-k8s-version-725426": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:14:11.665729  377144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:14:11.688895  377144 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:14:11.688997  377144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:11.747939  377144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:14:11.738154514 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:11.748074  377144 docker.go:319] overlay module found
	I1210 06:14:11.749759  377144 out.go:179] * Using the docker driver based on user configuration
	I1210 06:14:11.750795  377144 start.go:309] selected driver: docker
	I1210 06:14:11.750813  377144 start.go:927] validating driver "docker" against <nil>
	I1210 06:14:11.750828  377144 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:14:11.751584  377144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:11.809247  377144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:14:11.799219659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:11.809429  377144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:14:11.809688  377144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:14:11.811338  377144 out.go:179] * Using Docker driver with root privileges
	I1210 06:14:11.812431  377144 cni.go:84] Creating CNI manager for ""
	I1210 06:14:11.812489  377144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:11.812499  377144 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:14:11.812545  377144 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:11.813569  377144 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:14:11.814639  377144 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:14:11.815633  377144 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:14:11.816575  377144 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:14:11.816671  377144 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:14:11.836470  377144 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:14:11.836486  377144 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:14:11.846429  377144 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 06:14:11.928669  377144 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
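Both preload URLs return 404 at this point, so the run cannot use a preloaded tarball and instead caches the images individually (see the cache.go lines further down). A quick way to reproduce the check from any host, with the URL copied verbatim from the log line above:

    # prints the HTTP status code; 404 at the time of this run
    curl -s -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4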
	I1210 06:14:11.928793  377144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:14:11.928821  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json: {Name:mkf8b351fe32c3f192619433d4ef62158eb42523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:11.928972  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:11.928991  377144 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:14:11.929016  377144 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:11.929072  377144 start.go:364] duration metric: took 41.309µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:14:11.929113  377144 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:11.929179  377144 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:14:10.842152  358054 pod_ready.go:94] pod "kube-controller-manager-no-preload-468539" is "Ready"
	I1210 06:14:10.842177  358054 pod_ready.go:86] duration metric: took 311.35596ms for pod "kube-controller-manager-no-preload-468539" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:11.043049  358054 pod_ready.go:83] waiting for pod "kube-proxy-ngf5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:11.442593  358054 pod_ready.go:94] pod "kube-proxy-ngf5r" is "Ready"
	I1210 06:14:11.442623  358054 pod_ready.go:86] duration metric: took 399.547178ms for pod "kube-proxy-ngf5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:11.643098  358054 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-468539" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:12.041931  358054 pod_ready.go:94] pod "kube-scheduler-no-preload-468539" is "Ready"
	I1210 06:14:12.041965  358054 pod_ready.go:86] duration metric: took 398.845942ms for pod "kube-scheduler-no-preload-468539" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:12.041980  358054 pod_ready.go:40] duration metric: took 1.604187472s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:14:12.094375  358054 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:14:12.095957  358054 out.go:179] * Done! kubectl is now configured to use "no-preload-468539" cluster and "default" namespace by default
	W1210 06:14:11.061582  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:13.561647  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	I1210 06:14:11.710186  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:12.210063  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:12.710331  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:13.209987  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:13.710522  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:14.209629  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:14.710541  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:15.210339  366268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:15.297604  366268 kubeadm.go:1114] duration metric: took 4.176354002s to wait for elevateKubeSystemPrivileges
	I1210 06:14:15.297647  366268 kubeadm.go:403] duration metric: took 14.9119621s to StartCluster
	I1210 06:14:15.297670  366268 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:15.297739  366268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:15.299910  366268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:15.300188  366268 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:15.300309  366268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:14:15.300344  366268 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:14:15.300459  366268 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-028500"
	I1210 06:14:15.300478  366268 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-028500"
	I1210 06:14:15.300484  366268 addons.go:70] Setting default-storageclass=true in profile "embed-certs-028500"
	I1210 06:14:15.300505  366268 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:15.300510  366268 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:14:15.300510  366268 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-028500"
	I1210 06:14:15.300952  366268 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:15.301167  366268 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:15.303727  366268 out.go:179] * Verifying Kubernetes components...
	I1210 06:14:15.305179  366268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:15.330239  366268 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:15.331558  366268 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:15.331577  366268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:14:15.331647  366268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:14:15.331991  366268 addons.go:239] Setting addon default-storageclass=true in "embed-certs-028500"
	I1210 06:14:15.332037  366268 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:14:15.332509  366268 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:15.362720  366268 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:15.362744  366268 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:14:15.362788  366268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:14:15.362810  366268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:14:15.388163  366268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:14:15.413733  366268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:14:15.476641  366268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:15.485945  366268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:15.511014  366268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:15.624137  366268 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1210 06:14:15.626936  366268 node_ready.go:35] waiting up to 6m0s for node "embed-certs-028500" to be "Ready" ...
	I1210 06:14:15.878101  366268 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 06:14:15.879210  366268 addons.go:530] duration metric: took 578.878043ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 06:14:16.130955  366268 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-028500" context rescaled to 1 replicas
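For context, the configmap replace at 06:14:15.413733 above injects a hosts entry ahead of the forward plugin and a log directive ahead of errors; assuming the stock CoreDNS Corefile is otherwise untouched, the affected part should end up looking roughly like:

        log
        errors
        ...
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf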
	I1210 06:14:11.930913  377144 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:14:11.931120  377144 start.go:159] libmachine.API.Create for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:14:11.931144  377144 client.go:173] LocalClient.Create starting
	I1210 06:14:11.931187  377144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:14:11.931212  377144 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:11.931227  377144 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:11.931283  377144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:14:11.931301  377144 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:11.931310  377144 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:11.931623  377144 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:14:11.949312  377144 cli_runner.go:211] docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:14:11.949370  377144 network_create.go:284] running [docker network inspect default-k8s-diff-port-125336] to gather additional debugging logs...
	I1210 06:14:11.949385  377144 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336
	W1210 06:14:11.967106  377144 cli_runner.go:211] docker network inspect default-k8s-diff-port-125336 returned with exit code 1
	I1210 06:14:11.967139  377144 network_create.go:287] error running [docker network inspect default-k8s-diff-port-125336]: docker network inspect default-k8s-diff-port-125336: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-125336 not found
	I1210 06:14:11.967150  377144 network_create.go:289] output of [docker network inspect default-k8s-diff-port-125336]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-125336 not found
	
	** /stderr **
	I1210 06:14:11.967288  377144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:14:11.985789  377144 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:14:11.986604  377144 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:14:11.987371  377144 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:14:11.987894  377144 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b1ead66c643d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:c5:90:28:3d:ff} reservation:<nil>}
	I1210 06:14:11.988552  377144 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b8125d4cfb05 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:71:f2:00:8c:13} reservation:<nil>}
	I1210 06:14:11.989161  377144 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-8043b9026321 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:82:a4:a7:52:6e:bc} reservation:<nil>}
	I1210 06:14:11.990015  377144 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001feb7f0}
	I1210 06:14:11.990041  377144 network_create.go:124] attempt to create docker network default-k8s-diff-port-125336 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 06:14:11.990122  377144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 default-k8s-diff-port-125336
	I1210 06:14:12.040138  377144 network_create.go:108] docker network default-k8s-diff-port-125336 192.168.103.0/24 created
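The network_create lines above first skip every subnet already claimed by an existing bridge, then create the new one. Re-wrapped for readability, the command that was run (flags verbatim from the log, including the -o options minikube passes) is:

    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 \
      default-k8s-diff-port-125336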
	I1210 06:14:12.040166  377144 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-125336" container
	I1210 06:14:12.040232  377144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:14:12.061114  377144 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-125336 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:14:12.066471  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:12.082233  377144 oci.go:103] Successfully created a docker volume default-k8s-diff-port-125336
	I1210 06:14:12.082309  377144 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-125336-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --entrypoint /usr/bin/test -v default-k8s-diff-port-125336:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:14:12.221712  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:12.386524  377144 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386562  377144 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386572  377144 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386568  377144 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386531  377144 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386525  377144 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386692  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:14:12.386704  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:14:12.386664  377144 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386723  377144 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 160.066µs
	I1210 06:14:12.386735  377144 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:14:12.386744  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:14:12.386763  377144 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 260.42µs
	I1210 06:14:12.386773  377144 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:14:12.386705  377144 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 184.524µs
	I1210 06:14:12.386790  377144 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:14:12.386788  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:14:12.386675  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:14:12.386795  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:14:12.386804  377144 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 246.574µs
	I1210 06:14:12.386808  377144 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 231.72µs
	I1210 06:14:12.386812  377144 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 306.425µs
	I1210 06:14:12.386818  377144 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:14:12.386778  377144 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:12.386823  377144 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:14:12.386827  377144 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:14:12.386810  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:14:12.386876  377144 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 246.731µs
	I1210 06:14:12.386894  377144 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:14:12.386858  377144 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:14:12.386905  377144 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 153.287µs
	I1210 06:14:12.386917  377144 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:14:12.386924  377144 cache.go:87] Successfully saved all images to host disk.
	I1210 06:14:12.513833  377144 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-125336
	I1210 06:14:12.513914  377144 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:14:12.514094  377144 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:14:12.514139  377144 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:14:12.514194  377144 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:14:12.585914  377144 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-125336 --name default-k8s-diff-port-125336 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-125336 --network default-k8s-diff-port-125336 --ip 192.168.103.2 --volume default-k8s-diff-port-125336:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
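The run flags above map directly onto the HostConfig fields shown in the embed-certs-028500 inspect output earlier (Privileged, tmpfs on /run and /tmp, Memory 3221225472, five ports published on 127.0.0.1). A minimal spot-check, assuming the new container is still up:

    # 3221225472 corresponds to the --memory=3072mb flag; Privileged should print true
    docker container inspect default-k8s-diff-port-125336 --format '{{.HostConfig.Memory}} {{.HostConfig.Privileged}}'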
	I1210 06:14:12.888807  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Running}}
	I1210 06:14:12.910122  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:12.929981  377144 cli_runner.go:164] Run: docker exec default-k8s-diff-port-125336 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:14:12.982113  377144 oci.go:144] the created container "default-k8s-diff-port-125336" has a running status.
	I1210 06:14:12.982150  377144 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa...
	I1210 06:14:13.034115  377144 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:14:13.062572  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:13.084243  377144 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:14:13.084271  377144 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-125336 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:14:13.136379  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:13.162867  377144 machine.go:94] provisionDockerMachine start ...
	I1210 06:14:13.162975  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:13.187567  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:13.187909  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:13.187937  377144 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:14:13.188677  377144 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53622->127.0.0.1:33113: read: connection reset by peer
	I1210 06:14:16.337950  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:14:16.337978  377144 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:14:16.338040  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:16.359688  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:16.359996  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:16.360021  377144 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:14:16.512022  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:14:16.512118  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:16.537348  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:16.537653  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:16.537683  377144 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1210 06:14:15.566505  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:18.062518  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	I1210 06:14:16.696513  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:14:16.696542  377144 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:14:16.698419  377144 ubuntu.go:190] setting up certificates
	I1210 06:14:16.698440  377144 provision.go:84] configureAuth start
	I1210 06:14:16.698509  377144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:14:16.724121  377144 provision.go:143] copyHostCerts
	I1210 06:14:16.724189  377144 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:14:16.724199  377144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:14:16.724270  377144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:14:16.724396  377144 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:14:16.724406  377144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:14:16.724449  377144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:14:16.724540  377144 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:14:16.724546  377144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:14:16.724584  377144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:14:16.724664  377144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:14:16.768859  377144 provision.go:177] copyRemoteCerts
	I1210 06:14:16.768933  377144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:14:16.768989  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:16.794694  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:16.909826  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:14:16.937019  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:14:16.963443  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:14:16.988991  377144 provision.go:87] duration metric: took 290.528609ms to configureAuth
	I1210 06:14:16.989022  377144 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:14:16.989233  377144 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:16.989371  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.012694  377144 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:17.013181  377144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1210 06:14:17.013233  377144 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:14:17.375999  377144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:14:17.376169  377144 machine.go:97] duration metric: took 4.213275938s to provisionDockerMachine
	I1210 06:14:17.376199  377144 client.go:176] duration metric: took 5.445047641s to LocalClient.Create
	I1210 06:14:17.376245  377144 start.go:167] duration metric: took 5.4451166s to libmachine.API.Create "default-k8s-diff-port-125336"
	I1210 06:14:17.376259  377144 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:14:17.376271  377144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:14:17.376335  377144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:14:17.376397  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.403347  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.516489  377144 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:14:17.521458  377144 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:14:17.521495  377144 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:14:17.521507  377144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:14:17.521564  377144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:14:17.521689  377144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:14:17.521934  377144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:14:17.533674  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:14:17.559335  377144 start.go:296] duration metric: took 183.063493ms for postStartSetup
	I1210 06:14:17.559741  377144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:14:17.583240  377144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:14:17.583500  377144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:14:17.583555  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.605976  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.708723  377144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:14:17.715304  377144 start.go:128] duration metric: took 5.786110546s to createHost
	I1210 06:14:17.715331  377144 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 5.786226817s
	I1210 06:14:17.715406  377144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:14:17.738408  377144 ssh_runner.go:195] Run: cat /version.json
	I1210 06:14:17.738471  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.738725  377144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:14:17.738840  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:17.762731  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.764610  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:17.866930  377144 ssh_runner.go:195] Run: systemctl --version
	I1210 06:14:17.951907  377144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:14:18.000720  377144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:14:18.008012  377144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:14:18.008101  377144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:14:18.040030  377144 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:14:18.040059  377144 start.go:496] detecting cgroup driver to use...
	I1210 06:14:18.040109  377144 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:14:18.040160  377144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:14:18.063732  377144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:14:18.079291  377144 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:14:18.079349  377144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:14:18.102616  377144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:14:18.125181  377144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:14:18.239393  377144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:14:18.371586  377144 docker.go:234] disabling docker service ...
	I1210 06:14:18.371649  377144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:14:18.397030  377144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:14:18.412819  377144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:14:18.537461  377144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:14:18.637223  377144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:14:18.649745  377144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:14:18.664652  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:18.795096  377144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:14:18.795149  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.806688  377144 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:14:18.806738  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.815694  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.825095  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.835440  377144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:14:18.844541  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.852993  377144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.866450  377144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:18.875360  377144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:14:18.882773  377144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:14:18.890978  377144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:18.976579  377144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:14:19.886556  377144 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:14:19.886633  377144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:14:19.890738  377144 start.go:564] Will wait 60s for crictl version
	I1210 06:14:19.890803  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:19.894176  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:14:19.918653  377144 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:14:19.918732  377144 ssh_runner.go:195] Run: crio --version
	I1210 06:14:19.945335  377144 ssh_runner.go:195] Run: crio --version
	I1210 06:14:19.974121  377144 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1210 06:14:17.633410  366268 node_ready.go:57] node "embed-certs-028500" has "Ready":"False" status (will retry)
	W1210 06:14:20.130001  366268 node_ready.go:57] node "embed-certs-028500" has "Ready":"False" status (will retry)
	I1210 06:14:19.975295  377144 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:14:19.992955  377144 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:14:19.996936  377144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:14:20.007160  377144 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:14:20.007353  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:20.139548  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:20.269296  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:20.410840  377144 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:14:20.410911  377144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:14:20.438434  377144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 06:14:20.438460  377144 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:14:20.438514  377144 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:20.438531  377144 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:14:20.438592  377144 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:14:20.438612  377144 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:14:20.438627  377144 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:14:20.438751  377144 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:14:20.438595  377144 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:14:20.438614  377144 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:14:20.439617  377144 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:14:20.439763  377144 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:14:20.439783  377144 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:14:20.439788  377144 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:14:20.439887  377144 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:14:20.439921  377144 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:14:20.439993  377144 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:20.440199  377144 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:14:20.581012  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:14:20.581699  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 06:14:20.583688  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:14:20.590856  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:14:20.599003  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:14:20.620039  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 06:14:20.625637  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:14:20.632292  377144 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 06:14:20.632588  377144 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:14:20.632681  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.632518  377144 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 06:14:20.632825  377144 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:14:20.632859  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.635500  377144 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 06:14:20.635634  377144 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:14:20.635701  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.642716  377144 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 06:14:20.642751  377144 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:14:20.642801  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.648051  377144 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 06:14:20.648144  377144 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:14:20.648181  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.666341  377144 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 06:14:20.666382  377144 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:14:20.666432  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.669680  377144 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 06:14:20.669713  377144 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:14:20.669752  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.669766  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:14:20.669786  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:14:20.669808  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:14:20.669830  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:14:20.669851  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:14:20.674034  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:14:20.707955  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:14:20.708057  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:14:20.708126  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:14:20.711195  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:14:20.711315  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:14:20.711439  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:14:20.711482  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:14:20.749501  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:14:20.751252  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:14:20.751471  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:14:20.752275  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:14:20.754370  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:14:20.788960  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:14:20.789021  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:14:20.789053  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:14:20.789109  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 06:14:20.789181  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 06:14:20.789189  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:14:20.789216  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 06:14:20.789243  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:14:20.789263  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 06:14:20.789298  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:14:20.789319  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:14:20.809540  377144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:20.824510  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 06:14:20.824545  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 06:14:20.824604  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 06:14:20.824615  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:14:20.824513  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 06:14:20.824629  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 06:14:20.824666  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 06:14:20.824673  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:14:20.824679  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 06:14:20.824618  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:14:20.824733  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 06:14:20.824745  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 06:14:20.824767  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 06:14:20.824777  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 06:14:20.863337  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 06:14:20.863372  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 06:14:20.863393  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 06:14:20.863422  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 06:14:20.863338  377144 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 06:14:20.863443  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:14:20.863463  377144 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:20.863464  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 06:14:20.863506  377144 ssh_runner.go:195] Run: which crictl
	I1210 06:14:20.926128  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:20.954399  377144 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:14:20.954451  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 06:14:20.991170  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:21.390779  377144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:21.390906  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:14:21.390937  377144 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:14:21.390976  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	W1210 06:14:20.560817  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:22.565487  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:22.130480  366268 node_ready.go:57] node "embed-certs-028500" has "Ready":"False" status (will retry)
	W1210 06:14:24.630347  366268 node_ready.go:57] node "embed-certs-028500" has "Ready":"False" status (will retry)
	I1210 06:14:22.553413  377144 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.162594558s)
	I1210 06:14:22.553461  377144 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:14:22.553537  377144 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.162540124s)
	I1210 06:14:22.553552  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:14:22.553560  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 06:14:22.553588  377144 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:14:22.553630  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:14:22.558308  377144 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:14:22.558341  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 06:14:23.760742  377144 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.207091909s)
	I1210 06:14:23.760764  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 06:14:23.760790  377144 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:14:23.760832  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:14:25.046299  377144 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.285432537s)
	I1210 06:14:25.046331  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 06:14:25.046355  377144 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:14:25.046397  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:14:26.363025  377144 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (1.316590818s)
	I1210 06:14:26.363055  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 06:14:26.363091  377144 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:14:26.363138  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	W1210 06:14:25.061197  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:27.062069  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:26.630418  366268 node_ready.go:57] node "embed-certs-028500" has "Ready":"False" status (will retry)
	W1210 06:14:29.130553  366268 node_ready.go:57] node "embed-certs-028500" has "Ready":"False" status (will retry)
	I1210 06:14:29.630508  366268 node_ready.go:49] node "embed-certs-028500" is "Ready"
	I1210 06:14:29.630536  366268 node_ready.go:38] duration metric: took 14.003567486s for node "embed-certs-028500" to be "Ready" ...
	I1210 06:14:29.630551  366268 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:14:29.630609  366268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:14:29.643597  366268 api_server.go:72] duration metric: took 14.34336877s to wait for apiserver process to appear ...
	I1210 06:14:29.643627  366268 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:14:29.643651  366268 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:14:29.648595  366268 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:14:29.649632  366268 api_server.go:141] control plane version: v1.34.3
	I1210 06:14:29.649659  366268 api_server.go:131] duration metric: took 6.02351ms to wait for apiserver health ...
	I1210 06:14:29.649669  366268 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:14:29.653120  366268 system_pods.go:59] 8 kube-system pods found
	I1210 06:14:29.653146  366268 system_pods.go:61] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:14:29.653151  366268 system_pods.go:61] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running
	I1210 06:14:29.653162  366268 system_pods.go:61] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running
	I1210 06:14:29.653166  366268 system_pods.go:61] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running
	I1210 06:14:29.653170  366268 system_pods.go:61] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running
	I1210 06:14:29.653173  366268 system_pods.go:61] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running
	I1210 06:14:29.653176  366268 system_pods.go:61] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running
	I1210 06:14:29.653181  366268 system_pods.go:61] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:14:29.653187  366268 system_pods.go:74] duration metric: took 3.512286ms to wait for pod list to return data ...
	I1210 06:14:29.653196  366268 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:14:29.655485  366268 default_sa.go:45] found service account: "default"
	I1210 06:14:29.655503  366268 default_sa.go:55] duration metric: took 2.30124ms for default service account to be created ...
	I1210 06:14:29.655513  366268 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:14:29.658053  366268 system_pods.go:86] 8 kube-system pods found
	I1210 06:14:29.658072  366268 system_pods.go:89] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:14:29.658087  366268 system_pods.go:89] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running
	I1210 06:14:29.658093  366268 system_pods.go:89] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running
	I1210 06:14:29.658097  366268 system_pods.go:89] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running
	I1210 06:14:29.658102  366268 system_pods.go:89] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running
	I1210 06:14:29.658106  366268 system_pods.go:89] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running
	I1210 06:14:29.658109  366268 system_pods.go:89] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running
	I1210 06:14:29.658114  366268 system_pods.go:89] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:14:29.658133  366268 retry.go:31] will retry after 246.575708ms: missing components: kube-dns
	I1210 06:14:29.908981  366268 system_pods.go:86] 8 kube-system pods found
	I1210 06:14:29.909017  366268 system_pods.go:89] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:14:29.909025  366268 system_pods.go:89] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running
	I1210 06:14:29.909033  366268 system_pods.go:89] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running
	I1210 06:14:29.909038  366268 system_pods.go:89] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running
	I1210 06:14:29.909044  366268 system_pods.go:89] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running
	I1210 06:14:29.909049  366268 system_pods.go:89] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running
	I1210 06:14:29.909054  366268 system_pods.go:89] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running
	I1210 06:14:29.909062  366268 system_pods.go:89] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:14:29.909093  366268 retry.go:31] will retry after 350.605069ms: missing components: kube-dns
	I1210 06:14:30.270884  366268 system_pods.go:86] 8 kube-system pods found
	I1210 06:14:30.270924  366268 system_pods.go:89] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:14:30.270933  366268 system_pods.go:89] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running
	I1210 06:14:30.270940  366268 system_pods.go:89] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running
	I1210 06:14:30.270944  366268 system_pods.go:89] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running
	I1210 06:14:30.270950  366268 system_pods.go:89] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running
	I1210 06:14:30.270955  366268 system_pods.go:89] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running
	I1210 06:14:30.270959  366268 system_pods.go:89] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running
	I1210 06:14:30.270964  366268 system_pods.go:89] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Running
	I1210 06:14:30.270973  366268 system_pods.go:126] duration metric: took 615.454224ms to wait for k8s-apps to be running ...
	I1210 06:14:30.270982  366268 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:14:30.271028  366268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:30.284792  366268 system_svc.go:56] duration metric: took 13.801023ms WaitForService to wait for kubelet
	I1210 06:14:30.284822  366268 kubeadm.go:587] duration metric: took 14.984598006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:14:30.284850  366268 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:14:30.341327  366268 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:14:30.341359  366268 node_conditions.go:123] node cpu capacity is 8
	I1210 06:14:30.341378  366268 node_conditions.go:105] duration metric: took 56.518905ms to run NodePressure ...
	I1210 06:14:30.341393  366268 start.go:242] waiting for startup goroutines ...
	I1210 06:14:30.341407  366268 start.go:247] waiting for cluster config update ...
	I1210 06:14:30.341420  366268 start.go:256] writing updated cluster config ...
	I1210 06:14:30.391072  366268 ssh_runner.go:195] Run: rm -f paused
	I1210 06:14:30.395500  366268 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:14:30.428023  366268 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:30.432629  366268 pod_ready.go:94] pod "coredns-66bc5c9577-8xwfc" is "Ready"
	I1210 06:14:30.432653  366268 pod_ready.go:86] duration metric: took 4.601688ms for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:30.498808  366268 pod_ready.go:83] waiting for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:30.503621  366268 pod_ready.go:94] pod "etcd-embed-certs-028500" is "Ready"
	I1210 06:14:30.503646  366268 pod_ready.go:86] duration metric: took 4.814084ms for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:30.629111  366268 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:30.633917  366268 pod_ready.go:94] pod "kube-apiserver-embed-certs-028500" is "Ready"
	I1210 06:14:30.633940  366268 pod_ready.go:86] duration metric: took 4.797703ms for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:30.636140  366268 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:30.800113  366268 pod_ready.go:94] pod "kube-controller-manager-embed-certs-028500" is "Ready"
	I1210 06:14:30.800141  366268 pod_ready.go:86] duration metric: took 163.979673ms for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:31.000419  366268 pod_ready.go:83] waiting for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:31.399393  366268 pod_ready.go:94] pod "kube-proxy-sr7kh" is "Ready"
	I1210 06:14:31.399420  366268 pod_ready.go:86] duration metric: took 398.980321ms for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:31.599322  366268 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:27.812035  377144 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.448873135s)
	I1210 06:14:27.812061  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 06:14:27.812108  377144 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:14:27.812157  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:14:28.875861  377144 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.063682125s)
	I1210 06:14:28.875888  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 06:14:28.875919  377144 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:14:28.875964  377144 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:14:29.445976  377144 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:14:29.446015  377144 cache_images.go:125] Successfully loaded all cached images
	I1210 06:14:29.446020  377144 cache_images.go:94] duration metric: took 9.007542123s to LoadCachedImages
	I1210 06:14:29.446032  377144 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1210 06:14:29.446166  377144 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-125336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:14:29.446250  377144 ssh_runner.go:195] Run: crio config
	I1210 06:14:29.489949  377144 cni.go:84] Creating CNI manager for ""
	I1210 06:14:29.489968  377144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:29.489984  377144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:14:29.490004  377144 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125336 NodeName:default-k8s-diff-port-125336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:14:29.490140  377144 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
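	The generated config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new further down. A rough way to sanity-check such a file before handing it to kubeadm is to split the documents and confirm each apiVersion/kind pair; a small sketch, assuming gopkg.in/yaml.v3 and a local copy of the file named kubeadm.yaml:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is illustrative; on the node the file is /var/tmp/minikube/kubeadm.yaml.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Print one line per document, e.g. "kubeadm.k8s.io/v1beta4 / ClusterConfiguration".
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
```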
	
	I1210 06:14:29.490205  377144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:14:29.498630  377144 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 06:14:29.498708  377144 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 06:14:29.507923  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:29.507975  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 06:14:29.508006  377144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 06:14:29.508033  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 06:14:29.508113  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 06:14:29.508036  377144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:29.513557  377144 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 06:14:29.513584  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 06:14:29.513601  377144 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 06:14:29.513628  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 06:14:29.530343  377144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 06:14:29.574915  377144 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 06:14:29.574953  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
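	The "checksum=file:" URLs in the transfer step above mean each binary is fetched and verified against the published .sha256 file before it is cached and copied to the node. A stand-alone sketch of that verification under those assumptions (real dl.k8s.io URLs, simplified error handling, hypothetical fetchChecked helper):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchChecked downloads url and verifies it against the SHA-256 published at sumURL.
func fetchChecked(url, sumURL string) ([]byte, error) {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}
	// The .sha256 file holds the hex digest (optionally followed by a filename).
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	sum := sha256.Sum256(data)
	if got := hex.EncodeToString(sum[:]); got != want {
		return nil, fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return data, nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm"
	if _, err := fetchChecked(base, base+".sha256"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubeadm verified")
}
```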
	I1210 06:14:29.996218  377144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:14:30.004407  377144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 06:14:30.016934  377144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:14:30.098011  377144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1210 06:14:30.168226  377144 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:14:30.172610  377144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:14:30.183572  377144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:30.317121  377144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:30.340683  377144 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336 for IP: 192.168.103.2
	I1210 06:14:30.340710  377144 certs.go:195] generating shared ca certs ...
	I1210 06:14:30.340732  377144 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:30.340886  377144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:14:30.340944  377144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:14:30.340956  377144 certs.go:257] generating profile certs ...
	I1210 06:14:30.341021  377144 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key
	I1210 06:14:30.341041  377144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.crt with IP's: []
	I1210 06:14:30.736755  377144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.crt ...
	I1210 06:14:30.736781  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.crt: {Name:mkc5d0f4a62da8348376100fed3658ecc4a70864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:30.736924  377144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key ...
	I1210 06:14:30.736944  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key: {Name:mk0dd7a3d86806f561f4a7cb4363f96c05a787b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:30.737021  377144 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134
	I1210 06:14:30.737038  377144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt.75b93134 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1210 06:14:30.862888  377144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt.75b93134 ...
	I1210 06:14:30.862911  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt.75b93134: {Name:mk429d8a67abcecc8a16f1e64a7f5fde2aa6ad56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:30.863100  377144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134 ...
	I1210 06:14:30.863118  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134: {Name:mk9b533abb6bbd95d19cb4eb6955b4829bad9758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:30.863201  377144 certs.go:382] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt.75b93134 -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt
	I1210 06:14:30.863294  377144 certs.go:386] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134 -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key
	I1210 06:14:30.863357  377144 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key
	I1210 06:14:30.863378  377144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt with IP's: []
	I1210 06:14:30.944475  377144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt ...
	I1210 06:14:30.944497  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt: {Name:mk7b7af1575405a0f49635a618726a0ee0e05790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:30.944631  377144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key ...
	I1210 06:14:30.944644  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key: {Name:mk10549b5d69e43c7484667ec2f8d8d7025290ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
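	The profile certs generated above (client, apiserver, aggregator proxy-client) are ordinary X.509 certificates signed by the shared minikube CA, with the SAN IPs listed in the "Generating cert ... with IP's" line. A rough equivalent of that signing step with crypto/x509, not minikube's actual code; CA file names and SANs are copied from the log, and error handling is reduced to a must() helper:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the shared CA generated earlier in the log (paths illustrative).
	caPEM, err := os.ReadFile("ca.crt")
	must(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh key pair for the apiserver serving cert.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().Unix()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN IPs match the apiserver cert line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```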
	I1210 06:14:30.944827  377144 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:14:30.944866  377144 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:14:30.944877  377144 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:14:30.944903  377144 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:14:30.944927  377144 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:14:30.944951  377144 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:14:30.944994  377144 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:14:30.946252  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:14:30.964880  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:14:30.982348  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:14:30.999031  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:14:31.015774  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:14:31.032972  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:14:31.049781  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:14:31.066984  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:14:31.083401  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:14:31.101775  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:14:31.118872  377144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:14:31.137663  377144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:14:31.150435  377144 ssh_runner.go:195] Run: openssl version
	I1210 06:14:31.157278  377144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:31.164926  377144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:14:31.172031  377144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:31.175672  377144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:31.175719  377144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:31.209876  377144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:14:31.216858  377144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:14:31.224047  377144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:14:31.231176  377144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:14:31.238458  377144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:14:31.242008  377144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:14:31.242054  377144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:14:31.275551  377144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:14:31.282347  377144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9253.pem /etc/ssl/certs/51391683.0
	I1210 06:14:31.289148  377144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:14:31.296046  377144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:14:31.303103  377144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:14:31.307028  377144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:14:31.307069  377144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:14:31.340807  377144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:14:31.348178  377144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92532.pem /etc/ssl/certs/3ec20f2e.0
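	The openssl x509 -hash calls followed by "ln -fs <pem> /etc/ssl/certs/<hash>.0" above implement OpenSSL's subject-hash lookup scheme: the symlink is named after the certificate's subject hash (b5213941, 51391683, 3ec20f2e in this run) so TLS clients can find the CA by hash. A small sketch of the same idea, driving openssl via os/exec; the linkBySubjectHash helper is illustrative and would need root to write into /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log: hash the cert, then symlink it as <hash>.0.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // "-f" behaviour: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/9253.pem",
		"/usr/share/ca-certificates/92532.pem",
	} {
		if err := linkBySubjectHash(pem, "/etc/ssl/certs"); err != nil {
			fmt.Println(pem, err)
		}
	}
}
```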
	I1210 06:14:31.355372  377144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:14:31.358810  377144 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:14:31.358863  377144 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:31.358947  377144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:14:31.358997  377144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:14:31.385625  377144 cri.go:89] found id: ""
	I1210 06:14:31.385685  377144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:14:31.393918  377144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:14:31.402164  377144 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:14:31.402215  377144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:14:31.409796  377144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:14:31.409811  377144 kubeadm.go:158] found existing configuration files:
	
	I1210 06:14:31.409842  377144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 06:14:31.417120  377144 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:14:31.417154  377144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:14:31.424238  377144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 06:14:31.431409  377144 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:14:31.431453  377144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:14:31.438236  377144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 06:14:31.445409  377144 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:14:31.445441  377144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:14:31.452345  377144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 06:14:31.459413  377144 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:14:31.459446  377144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:14:31.466305  377144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:14:31.520611  377144 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:14:31.577498  377144 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:14:31.999807  366268 pod_ready.go:94] pod "kube-scheduler-embed-certs-028500" is "Ready"
	I1210 06:14:31.999832  366268 pod_ready.go:86] duration metric: took 400.489573ms for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:31.999844  366268 pod_ready.go:40] duration metric: took 1.604312498s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:14:32.051024  366268 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:14:32.052863  366268 out.go:179] * Done! kubectl is now configured to use "embed-certs-028500" cluster and "default" namespace by default
	W1210 06:14:29.563863  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	W1210 06:14:32.062027  369109 pod_ready.go:104] pod "coredns-5dd5756b68-vxb6d" is not "Ready", error: <nil>
	I1210 06:14:34.060887  369109 pod_ready.go:94] pod "coredns-5dd5756b68-vxb6d" is "Ready"
	I1210 06:14:34.060911  369109 pod_ready.go:86] duration metric: took 32.505484011s for pod "coredns-5dd5756b68-vxb6d" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.063882  369109 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.067510  369109 pod_ready.go:94] pod "etcd-old-k8s-version-725426" is "Ready"
	I1210 06:14:34.067531  369109 pod_ready.go:86] duration metric: took 3.628745ms for pod "etcd-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.069893  369109 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.073959  369109 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-725426" is "Ready"
	I1210 06:14:34.073977  369109 pod_ready.go:86] duration metric: took 4.065313ms for pod "kube-apiserver-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.076386  369109 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.259050  369109 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-725426" is "Ready"
	I1210 06:14:34.259075  369109 pod_ready.go:86] duration metric: took 182.669857ms for pod "kube-controller-manager-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.459629  369109 pod_ready.go:83] waiting for pod "kube-proxy-m59j8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:34.858564  369109 pod_ready.go:94] pod "kube-proxy-m59j8" is "Ready"
	I1210 06:14:34.858589  369109 pod_ready.go:86] duration metric: took 398.937658ms for pod "kube-proxy-m59j8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:35.059120  369109 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:35.458529  369109 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-725426" is "Ready"
	I1210 06:14:35.458561  369109 pod_ready.go:86] duration metric: took 399.414442ms for pod "kube-scheduler-old-k8s-version-725426" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:14:35.458572  369109 pod_ready.go:40] duration metric: took 33.907740501s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:14:35.501647  369109 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 06:14:35.503075  369109 out.go:203] 
	W1210 06:14:35.504132  369109 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 06:14:35.505283  369109 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 06:14:35.506386  369109 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-725426" cluster and "default" namespace by default
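	The pod_ready waits above poll each kube-system pod until its PodReady condition is True, or the pod is gone. A bare-bones client-go version of that check, assuming a reachable kubeconfig in the default location and one of the pod names from the log; the podReadyOrGone helper is illustrative, not minikube's implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone reports true once the pod is Ready or no longer exists.
func podReadyOrGone(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if errors.IsNotFound(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podReadyOrGone(cs, "kube-system", "kube-scheduler-embed-certs-028500")
		if err == nil && ok {
			fmt.Println("ready (or gone)")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```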
	
	
	==> CRI-O <==
	Dec 10 06:14:29 embed-certs-028500 crio[770]: time="2025-12-10T06:14:29.580551779Z" level=info msg="Starting container: 4b17d1cacf3d57e240f8ae55cd9f58fd48946bb8d0c16db1bc751512a77e618c" id=684b8ead-d04e-416c-9401-2e1731702729 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:29 embed-certs-028500 crio[770]: time="2025-12-10T06:14:29.582928169Z" level=info msg="Started container" PID=2936 containerID=4b17d1cacf3d57e240f8ae55cd9f58fd48946bb8d0c16db1bc751512a77e618c description=kube-system/coredns-66bc5c9577-8xwfc/coredns id=684b8ead-d04e-416c-9401-2e1731702729 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81673b9c8a5075569c99ea45de6eedd6049c828d5f94cfcf895836686c51700a
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.498305255Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d31d724f-c147-42af-8bbb-56c909120615 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.498384099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.504219501Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9ec3744c783719eef5b7c7094aac3c766322aee3a91eb0c5e06175fb36995b50 UID:e898aa0d-3dca-4ee0-8728-aca196c5331d NetNS:/var/run/netns/804246eb-5706-4bcf-bf6e-8418c2f00378 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000902390}] Aliases:map[]}"
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.50424428Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.513799899Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9ec3744c783719eef5b7c7094aac3c766322aee3a91eb0c5e06175fb36995b50 UID:e898aa0d-3dca-4ee0-8728-aca196c5331d NetNS:/var/run/netns/804246eb-5706-4bcf-bf6e-8418c2f00378 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000902390}] Aliases:map[]}"
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.513910357Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.514578147Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.515424401Z" level=info msg="Ran pod sandbox 9ec3744c783719eef5b7c7094aac3c766322aee3a91eb0c5e06175fb36995b50 with infra container: default/busybox/POD" id=d31d724f-c147-42af-8bbb-56c909120615 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.516697172Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=33eb11ba-3f2a-41ba-959e-341e0d37f4fd name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.516838271Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=33eb11ba-3f2a-41ba-959e-341e0d37f4fd name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.516901893Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=33eb11ba-3f2a-41ba-959e-341e0d37f4fd name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.517562551Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c8a668e9-f91a-4759-ae88-751aff8b83cc name=/runtime.v1.ImageService/PullImage
	Dec 10 06:14:32 embed-certs-028500 crio[770]: time="2025-12-10T06:14:32.519585896Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.11063096Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c8a668e9-f91a-4759-ae88-751aff8b83cc name=/runtime.v1.ImageService/PullImage
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.111196523Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9ddf4a3f-2a37-422b-b21f-6888a6623191 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.112394087Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8a68ce54-32f4-4ed4-a920-da2323c6f8f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.115643329Z" level=info msg="Creating container: default/busybox/busybox" id=3a18789e-3344-4832-bfb2-f8da42e1877d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.115771492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.120139834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.120648976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.143384093Z" level=info msg="Created container 4cde17a55c58388b066428381661e2bc210ea0444ae4504b13e46865992990a1: default/busybox/busybox" id=3a18789e-3344-4832-bfb2-f8da42e1877d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.143841465Z" level=info msg="Starting container: 4cde17a55c58388b066428381661e2bc210ea0444ae4504b13e46865992990a1" id=f69e5f4c-4b7f-43c2-a158-de4dbf8b6310 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:33 embed-certs-028500 crio[770]: time="2025-12-10T06:14:33.145401079Z" level=info msg="Started container" PID=3013 containerID=4cde17a55c58388b066428381661e2bc210ea0444ae4504b13e46865992990a1 description=default/busybox/busybox id=f69e5f4c-4b7f-43c2-a158-de4dbf8b6310 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ec3744c783719eef5b7c7094aac3c766322aee3a91eb0c5e06175fb36995b50
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	4cde17a55c583       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   9ec3744c78371       busybox                                      default
	4b17d1cacf3d5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   81673b9c8a507       coredns-66bc5c9577-8xwfc                     kube-system
	797a9a79e64c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   263e4b4ea041a       storage-provisioner                          kube-system
	456018d94471d       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   5e0c5fa15f911       kindnet-6gq2z                                kube-system
	5d9272af40e03       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      23 seconds ago      Running             kube-proxy                0                   591432e6e6c3c       kube-proxy-sr7kh                             kube-system
	15796fb726b80       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      33 seconds ago      Running             kube-scheduler            0                   25a8525af7308       kube-scheduler-embed-certs-028500            kube-system
	047952ea30259       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      33 seconds ago      Running             kube-apiserver            0                   eafd30104a700       kube-apiserver-embed-certs-028500            kube-system
	03fafafec0c90       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      33 seconds ago      Running             kube-controller-manager   0                   23cd62549a6a2       kube-controller-manager-embed-certs-028500   kube-system
	02c64b5df99db       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   e01d33c044e6b       etcd-embed-certs-028500                      kube-system
	
	
	==> coredns [4b17d1cacf3d57e240f8ae55cd9f58fd48946bb8d0c16db1bc751512a77e618c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54118 - 61217 "HINFO IN 427919394291300429.7847059564609828255. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.459955857s
	
	
	==> describe nodes <==
	Name:               embed-certs-028500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-028500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=embed-certs-028500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_14_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:14:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-028500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:14:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:14:29 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:14:29 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:14:29 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:14:29 +0000   Wed, 10 Dec 2025 06:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-028500
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                cff73820-6963-4ea9-ae17-4b15b6269bbe
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-66bc5c9577-8xwfc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-028500                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-6gq2z                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-028500             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-028500    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-sr7kh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-028500             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-028500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-028500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-028500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-028500 event: Registered Node embed-certs-028500 in Controller
	  Normal  NodeReady                10s   kubelet          Node embed-certs-028500 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [02c64b5df99dbe7f91c14561155e1cda6c1300b38241f86f2ed24ecf9800971b] <==
	{"level":"warn","ts":"2025-12-10T06:14:06.974126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:06.983196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:06.993114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.006024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.023414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.031765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.039923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.047287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.053893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.063129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.069870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.076792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.084099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.092043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.098750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.105344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.121599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.127864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.134624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:07.182744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T06:14:19.287586Z","caller":"traceutil/trace.go:172","msg":"trace[978166807] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"108.691622ms","start":"2025-12-10T06:14:19.178877Z","end":"2025-12-10T06:14:19.287569Z","steps":["trace[978166807] 'process raft request'  (duration: 108.556075ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:14:19.328767Z","caller":"traceutil/trace.go:172","msg":"trace[1331788691] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"144.744209ms","start":"2025-12-10T06:14:19.183998Z","end":"2025-12-10T06:14:19.328742Z","steps":["trace[1331788691] 'process raft request'  (duration: 144.627785ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:14:30.626988Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.220137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:14:30.627092Z","caller":"traceutil/trace.go:172","msg":"trace[695327098] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"122.595012ms","start":"2025-12-10T06:14:30.504460Z","end":"2025-12-10T06:14:30.627055Z","steps":["trace[695327098] 'process raft request'  (duration: 56.555716ms)","trace[695327098] 'compare'  (duration: 65.926469ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:14:30.627109Z","caller":"traceutil/trace.go:172","msg":"trace[226017099] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:421; }","duration":"122.349616ms","start":"2025-12-10T06:14:30.504724Z","end":"2025-12-10T06:14:30.627074Z","steps":["trace[226017099] 'agreement among raft nodes before linearized reading'  (duration: 56.267717ms)","trace[226017099] 'range keys from in-memory index tree'  (duration: 65.916155ms)"],"step_count":2}
	
	
	==> kernel <==
	 06:14:39 up 57 min,  0 user,  load average: 5.84, 4.58, 2.86
	Linux embed-certs-028500 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [456018d94471d6d8028cbcfb90d8ee4b93404c3c86729ad0cb157b854f3a8a49] <==
	I1210 06:14:18.495355       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:14:18.495657       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:14:18.495851       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:14:18.495872       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:14:18.495899       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:14:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:14:18.695846       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:14:18.695872       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:14:18.695891       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:14:18.787850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:14:19.095989       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:14:19.096016       1 metrics.go:72] Registering metrics
	I1210 06:14:19.096120       1 controller.go:711] "Syncing nftables rules"
	I1210 06:14:28.700714       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:14:28.700795       1 main.go:301] handling current node
	I1210 06:14:38.700214       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:14:38.700250       1 main.go:301] handling current node
	
	
	==> kube-apiserver [047952ea302595b6c0eb387399aca2745891b8a6079c04e730dce57f10d78522] <==
	I1210 06:14:07.671744       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:14:07.672635       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:14:07.695704       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 06:14:07.695735       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:14:07.701396       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:14:07.704467       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:14:07.867251       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:14:08.576357       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:14:08.582017       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:14:08.582041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:14:09.039793       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:14:09.074597       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:14:09.181056       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:14:09.187294       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1210 06:14:09.188395       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:14:09.192583       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:14:09.602481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:14:10.264261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:14:10.273027       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:14:10.279432       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:14:15.372734       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:14:15.381379       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:14:15.507329       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:14:15.703817       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1210 06:14:38.300625       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:55278: use of closed network connection
	
	
	==> kube-controller-manager [03fafafec0c90774fcfe112a4354f6a8b2f11da23b52586569db6c0e9ccbd98d] <==
	I1210 06:14:14.573701       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:14:14.573708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:14:14.574874       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:14:14.574907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:14:14.600837       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 06:14:14.600903       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:14:14.600914       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 06:14:14.600920       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:14:14.600927       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:14:14.600943       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:14:14.600962       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:14:14.601146       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:14:14.601300       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:14:14.601306       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:14:14.601491       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:14:14.601565       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:14:14.601676       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 06:14:14.602395       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:14:14.603592       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:14:14.606768       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:14:14.608469       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:14:14.614587       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:14:14.621854       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:14:14.621898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:14:29.532462       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5d9272af40e036619cf42e933665a3484ac276c5d6977e1a5f901ad126d5f64d] <==
	I1210 06:14:16.139635       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:14:16.223469       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:14:16.324059       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:14:16.324155       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 06:14:16.324233       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:14:16.345844       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:14:16.345901       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:14:16.351401       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:14:16.351787       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:14:16.351812       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:16.353419       1 config.go:200] "Starting service config controller"
	I1210 06:14:16.353441       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:14:16.353652       1 config.go:309] "Starting node config controller"
	I1210 06:14:16.353813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:14:16.353832       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:14:16.353858       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:14:16.353947       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:14:16.353877       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:14:16.354061       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:14:16.453646       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:14:16.454873       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:14:16.454884       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [15796fb726b80771a54bf6099b416331e5825b03b9bf16980d8c7a666c16c62e] <==
	E1210 06:14:07.628321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:14:07.628888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:14:07.629063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:14:07.629162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:14:07.629238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:14:07.629602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:14:07.629674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:14:07.629674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:14:07.629787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:14:07.629711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:14:07.629830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:14:07.629859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:14:07.629934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:14:08.446244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:14:08.454345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:14:08.511720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:14:08.514863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:14:08.538098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:14:08.569645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:14:08.598861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:14:08.619428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:14:08.623649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:14:08.651739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:14:08.918705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 06:14:11.423776       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:14:11 embed-certs-028500 kubelet[2339]: I1210 06:14:11.147438    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-028500" podStartSLOduration=1.147407534 podStartE2EDuration="1.147407534s" podCreationTimestamp="2025-12-10 06:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:11.141248849 +0000 UTC m=+1.123618618" watchObservedRunningTime="2025-12-10 06:14:11.147407534 +0000 UTC m=+1.129777300"
	Dec 10 06:14:11 embed-certs-028500 kubelet[2339]: I1210 06:14:11.147628    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-028500" podStartSLOduration=1.147613477 podStartE2EDuration="1.147613477s" podCreationTimestamp="2025-12-10 06:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:11.122117434 +0000 UTC m=+1.104487201" watchObservedRunningTime="2025-12-10 06:14:11.147613477 +0000 UTC m=+1.129983244"
	Dec 10 06:14:11 embed-certs-028500 kubelet[2339]: I1210 06:14:11.178992    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-028500" podStartSLOduration=1.17897717 podStartE2EDuration="1.17897717s" podCreationTimestamp="2025-12-10 06:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:11.159491045 +0000 UTC m=+1.141860811" watchObservedRunningTime="2025-12-10 06:14:11.17897717 +0000 UTC m=+1.161346937"
	Dec 10 06:14:11 embed-certs-028500 kubelet[2339]: I1210 06:14:11.187766    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-028500" podStartSLOduration=1.18774779 podStartE2EDuration="1.18774779s" podCreationTimestamp="2025-12-10 06:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:11.178890823 +0000 UTC m=+1.161260590" watchObservedRunningTime="2025-12-10 06:14:11.18774779 +0000 UTC m=+1.170117557"
	Dec 10 06:14:14 embed-certs-028500 kubelet[2339]: I1210 06:14:14.654609    2339 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 06:14:14 embed-certs-028500 kubelet[2339]: I1210 06:14:14.655438    2339 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822598    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b34d810-7015-47ad-98a2-41d80c02a77e-xtables-lock\") pod \"kube-proxy-sr7kh\" (UID: \"0b34d810-7015-47ad-98a2-41d80c02a77e\") " pod="kube-system/kube-proxy-sr7kh"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822653    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph5vh\" (UniqueName: \"kubernetes.io/projected/0b34d810-7015-47ad-98a2-41d80c02a77e-kube-api-access-ph5vh\") pod \"kube-proxy-sr7kh\" (UID: \"0b34d810-7015-47ad-98a2-41d80c02a77e\") " pod="kube-system/kube-proxy-sr7kh"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822685    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cce0711c-ff56-4335-b244-17f0180eb4d4-lib-modules\") pod \"kindnet-6gq2z\" (UID: \"cce0711c-ff56-4335-b244-17f0180eb4d4\") " pod="kube-system/kindnet-6gq2z"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822739    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b34d810-7015-47ad-98a2-41d80c02a77e-kube-proxy\") pod \"kube-proxy-sr7kh\" (UID: \"0b34d810-7015-47ad-98a2-41d80c02a77e\") " pod="kube-system/kube-proxy-sr7kh"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822764    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cce0711c-ff56-4335-b244-17f0180eb4d4-cni-cfg\") pod \"kindnet-6gq2z\" (UID: \"cce0711c-ff56-4335-b244-17f0180eb4d4\") " pod="kube-system/kindnet-6gq2z"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822784    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cce0711c-ff56-4335-b244-17f0180eb4d4-xtables-lock\") pod \"kindnet-6gq2z\" (UID: \"cce0711c-ff56-4335-b244-17f0180eb4d4\") " pod="kube-system/kindnet-6gq2z"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822806    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr67v\" (UniqueName: \"kubernetes.io/projected/cce0711c-ff56-4335-b244-17f0180eb4d4-kube-api-access-rr67v\") pod \"kindnet-6gq2z\" (UID: \"cce0711c-ff56-4335-b244-17f0180eb4d4\") " pod="kube-system/kindnet-6gq2z"
	Dec 10 06:14:15 embed-certs-028500 kubelet[2339]: I1210 06:14:15.822833    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b34d810-7015-47ad-98a2-41d80c02a77e-lib-modules\") pod \"kube-proxy-sr7kh\" (UID: \"0b34d810-7015-47ad-98a2-41d80c02a77e\") " pod="kube-system/kube-proxy-sr7kh"
	Dec 10 06:14:17 embed-certs-028500 kubelet[2339]: I1210 06:14:17.899995    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sr7kh" podStartSLOduration=2.899972904 podStartE2EDuration="2.899972904s" podCreationTimestamp="2025-12-10 06:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:16.176401418 +0000 UTC m=+6.158771186" watchObservedRunningTime="2025-12-10 06:14:17.899972904 +0000 UTC m=+7.882342671"
	Dec 10 06:14:22 embed-certs-028500 kubelet[2339]: I1210 06:14:22.034641    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6gq2z" podStartSLOduration=4.919468608 podStartE2EDuration="7.03462204s" podCreationTimestamp="2025-12-10 06:14:15 +0000 UTC" firstStartedPulling="2025-12-10 06:14:16.044059803 +0000 UTC m=+6.026429562" lastFinishedPulling="2025-12-10 06:14:18.159213233 +0000 UTC m=+8.141582994" observedRunningTime="2025-12-10 06:14:19.175555964 +0000 UTC m=+9.157925731" watchObservedRunningTime="2025-12-10 06:14:22.03462204 +0000 UTC m=+12.016991811"
	Dec 10 06:14:29 embed-certs-028500 kubelet[2339]: I1210 06:14:29.181828    2339 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:14:29 embed-certs-028500 kubelet[2339]: I1210 06:14:29.314555    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62gvg\" (UniqueName: \"kubernetes.io/projected/7ad22b4a-5d1a-403a-a57e-69745116eb0c-kube-api-access-62gvg\") pod \"coredns-66bc5c9577-8xwfc\" (UID: \"7ad22b4a-5d1a-403a-a57e-69745116eb0c\") " pod="kube-system/coredns-66bc5c9577-8xwfc"
	Dec 10 06:14:29 embed-certs-028500 kubelet[2339]: I1210 06:14:29.314618    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhsf\" (UniqueName: \"kubernetes.io/projected/c6fe10b9-7d0d-4911-afc6-65b935770c41-kube-api-access-bhhsf\") pod \"storage-provisioner\" (UID: \"c6fe10b9-7d0d-4911-afc6-65b935770c41\") " pod="kube-system/storage-provisioner"
	Dec 10 06:14:29 embed-certs-028500 kubelet[2339]: I1210 06:14:29.314727    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ad22b4a-5d1a-403a-a57e-69745116eb0c-config-volume\") pod \"coredns-66bc5c9577-8xwfc\" (UID: \"7ad22b4a-5d1a-403a-a57e-69745116eb0c\") " pod="kube-system/coredns-66bc5c9577-8xwfc"
	Dec 10 06:14:29 embed-certs-028500 kubelet[2339]: I1210 06:14:29.314815    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c6fe10b9-7d0d-4911-afc6-65b935770c41-tmp\") pod \"storage-provisioner\" (UID: \"c6fe10b9-7d0d-4911-afc6-65b935770c41\") " pod="kube-system/storage-provisioner"
	Dec 10 06:14:30 embed-certs-028500 kubelet[2339]: I1210 06:14:30.187120    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8xwfc" podStartSLOduration=15.187073803 podStartE2EDuration="15.187073803s" podCreationTimestamp="2025-12-10 06:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:30.186883886 +0000 UTC m=+20.169253659" watchObservedRunningTime="2025-12-10 06:14:30.187073803 +0000 UTC m=+20.169443570"
	Dec 10 06:14:30 embed-certs-028500 kubelet[2339]: I1210 06:14:30.340915    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.340889461 podStartE2EDuration="15.340889461s" podCreationTimestamp="2025-12-10 06:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:30.269632133 +0000 UTC m=+20.252001900" watchObservedRunningTime="2025-12-10 06:14:30.340889461 +0000 UTC m=+20.323259229"
	Dec 10 06:14:32 embed-certs-028500 kubelet[2339]: I1210 06:14:32.232726    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjdh\" (UniqueName: \"kubernetes.io/projected/e898aa0d-3dca-4ee0-8728-aca196c5331d-kube-api-access-nxjdh\") pod \"busybox\" (UID: \"e898aa0d-3dca-4ee0-8728-aca196c5331d\") " pod="default/busybox"
	Dec 10 06:14:38 embed-certs-028500 kubelet[2339]: E1210 06:14:38.300499    2339 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39452->127.0.0.1:41843: write tcp 127.0.0.1:39452->127.0.0.1:41843: write: broken pipe
	
	
	==> storage-provisioner [797a9a79e64c5d4adf1d46f6e2e81677ad40bd783ca9a5ef2bc4481e0a191342] <==
	I1210 06:14:29.570264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:14:29.579494       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:14:29.579549       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:14:29.582254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:29.592988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:14:29.593223       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:14:29.593760       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-028500_0390a483-8939-40b6-b86f-1b8781d32ef4!
	I1210 06:14:29.593619       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f1f4ad0-e1fa-4611-8756-9fd0b611cf54", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-028500_0390a483-8939-40b6-b86f-1b8781d32ef4 became leader
	W1210 06:14:29.597828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:29.602204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:14:29.694202       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-028500_0390a483-8939-40b6-b86f-1b8781d32ef4!
	W1210 06:14:31.605258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:31.609160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:33.612455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:33.617388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:35.621274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:35.625944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:37.628952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:37.632824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:39.636176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:39.639817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-028500 -n embed-certs-028500
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-028500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.14s)
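The post-mortem for this failure ends with a control-plane status probe and a query for pods that are not in the Running phase. Both can be replayed by hand against the same profile while it is still up; a minimal sketch (the profile name comes from the log above, the commands are unchanged from the post-mortem helpers):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-028500 -n embed-certs-028500
	kubectl --context embed-certs-028500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

An empty result from the second command means no pod is in any phase other than Running, which is consistent with the healthy kubelet, coredns and storage-provisioner logs captured above.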

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-725426 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-725426 --alsologtostderr -v=1: exit status 80 (1.915088331s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-725426 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:14:47.327673  385546 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:47.327773  385546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:47.327782  385546 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:47.327786  385546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:47.327967  385546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:14:47.328200  385546 out.go:368] Setting JSON to false
	I1210 06:14:47.328217  385546 mustload.go:66] Loading cluster: old-k8s-version-725426
	I1210 06:14:47.328554  385546 config.go:182] Loaded profile config "old-k8s-version-725426": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:14:47.328906  385546 cli_runner.go:164] Run: docker container inspect old-k8s-version-725426 --format={{.State.Status}}
	I1210 06:14:47.347142  385546 host.go:66] Checking if "old-k8s-version-725426" exists ...
	I1210 06:14:47.347393  385546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:47.404615  385546 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:94 OomKillDisable:false NGoroutines:96 SystemTime:2025-12-10 06:14:47.393583154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:47.405287  385546 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-725426 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:14:47.406641  385546 out.go:179] * Pausing node old-k8s-version-725426 ... 
	I1210 06:14:47.407706  385546 host.go:66] Checking if "old-k8s-version-725426" exists ...
	I1210 06:14:47.407979  385546 ssh_runner.go:195] Run: systemctl --version
	I1210 06:14:47.408031  385546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-725426
	I1210 06:14:47.426136  385546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/old-k8s-version-725426/id_rsa Username:docker}
	I1210 06:14:47.525822  385546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:47.537529  385546 pause.go:52] kubelet running: true
	I1210 06:14:47.537583  385546 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:14:47.702741  385546 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:14:47.702833  385546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:14:47.775317  385546 cri.go:89] found id: "39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232"
	I1210 06:14:47.775338  385546 cri.go:89] found id: "3ec52d6fd6de5971b9c5c66dabd9b83677b9c3bb23ccec3b078f383aee8c9fbe"
	I1210 06:14:47.775345  385546 cri.go:89] found id: "5d02e309047c074c7ea66ed67d4e89f47b261b4f75b4e00eb2cb3070da54fe1c"
	I1210 06:14:47.775349  385546 cri.go:89] found id: "9e0c7c4b5d6256e465920477a0bbc62ffede25d7f1093bad071e1332673719ed"
	I1210 06:14:47.775354  385546 cri.go:89] found id: "5990f9b53cfb7195fb05941363f517e271964705fb893f41bcded21a1b4fc06e"
	I1210 06:14:47.775359  385546 cri.go:89] found id: "217c2500f89f71d1324ffbf4ed5b1db6ba6968887bda00d70e62b6c6b61b2d9c"
	I1210 06:14:47.775363  385546 cri.go:89] found id: "157dd67e0dd14a9973e3a0ca206bd7d0544b492de0e9c6fb754a5a6046365641"
	I1210 06:14:47.775367  385546 cri.go:89] found id: "8f5281037f2c44ec8cb539eef6c1a25935bd970e552a1b7795809e847a03d5ca"
	I1210 06:14:47.775376  385546 cri.go:89] found id: "e4a03ac2f7438f6d74706fce0fe8f58a58512a1dac9e6fcff2e15b6523469282"
	I1210 06:14:47.775385  385546 cri.go:89] found id: "63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7"
	I1210 06:14:47.775394  385546 cri.go:89] found id: "0b17c77ccaaf0facf4210e4593ca7afa37dcce6d104183e98e0ff909ab0e54f1"
	I1210 06:14:47.775399  385546 cri.go:89] found id: ""
	I1210 06:14:47.775452  385546 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:14:47.786909  385546 retry.go:31] will retry after 292.596149ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:47Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:14:48.080186  385546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:48.092913  385546 pause.go:52] kubelet running: false
	I1210 06:14:48.092966  385546 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:14:48.238654  385546 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:14:48.238721  385546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:14:48.306540  385546 cri.go:89] found id: "39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232"
	I1210 06:14:48.306562  385546 cri.go:89] found id: "3ec52d6fd6de5971b9c5c66dabd9b83677b9c3bb23ccec3b078f383aee8c9fbe"
	I1210 06:14:48.306568  385546 cri.go:89] found id: "5d02e309047c074c7ea66ed67d4e89f47b261b4f75b4e00eb2cb3070da54fe1c"
	I1210 06:14:48.306573  385546 cri.go:89] found id: "9e0c7c4b5d6256e465920477a0bbc62ffede25d7f1093bad071e1332673719ed"
	I1210 06:14:48.306577  385546 cri.go:89] found id: "5990f9b53cfb7195fb05941363f517e271964705fb893f41bcded21a1b4fc06e"
	I1210 06:14:48.306582  385546 cri.go:89] found id: "217c2500f89f71d1324ffbf4ed5b1db6ba6968887bda00d70e62b6c6b61b2d9c"
	I1210 06:14:48.306586  385546 cri.go:89] found id: "157dd67e0dd14a9973e3a0ca206bd7d0544b492de0e9c6fb754a5a6046365641"
	I1210 06:14:48.306590  385546 cri.go:89] found id: "8f5281037f2c44ec8cb539eef6c1a25935bd970e552a1b7795809e847a03d5ca"
	I1210 06:14:48.306595  385546 cri.go:89] found id: "e4a03ac2f7438f6d74706fce0fe8f58a58512a1dac9e6fcff2e15b6523469282"
	I1210 06:14:48.306607  385546 cri.go:89] found id: "63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7"
	I1210 06:14:48.306617  385546 cri.go:89] found id: "0b17c77ccaaf0facf4210e4593ca7afa37dcce6d104183e98e0ff909ab0e54f1"
	I1210 06:14:48.306621  385546 cri.go:89] found id: ""
	I1210 06:14:48.306666  385546 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:14:48.319161  385546 retry.go:31] will retry after 483.533615ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:48Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:14:48.802817  385546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:48.819463  385546 pause.go:52] kubelet running: false
	I1210 06:14:48.819514  385546 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:14:49.042000  385546 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:14:49.042093  385546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:14:49.145399  385546 cri.go:89] found id: "39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232"
	I1210 06:14:49.145424  385546 cri.go:89] found id: "3ec52d6fd6de5971b9c5c66dabd9b83677b9c3bb23ccec3b078f383aee8c9fbe"
	I1210 06:14:49.145431  385546 cri.go:89] found id: "5d02e309047c074c7ea66ed67d4e89f47b261b4f75b4e00eb2cb3070da54fe1c"
	I1210 06:14:49.145435  385546 cri.go:89] found id: "9e0c7c4b5d6256e465920477a0bbc62ffede25d7f1093bad071e1332673719ed"
	I1210 06:14:49.145440  385546 cri.go:89] found id: "5990f9b53cfb7195fb05941363f517e271964705fb893f41bcded21a1b4fc06e"
	I1210 06:14:49.145448  385546 cri.go:89] found id: "217c2500f89f71d1324ffbf4ed5b1db6ba6968887bda00d70e62b6c6b61b2d9c"
	I1210 06:14:49.145452  385546 cri.go:89] found id: "157dd67e0dd14a9973e3a0ca206bd7d0544b492de0e9c6fb754a5a6046365641"
	I1210 06:14:49.145456  385546 cri.go:89] found id: "8f5281037f2c44ec8cb539eef6c1a25935bd970e552a1b7795809e847a03d5ca"
	I1210 06:14:49.145461  385546 cri.go:89] found id: "e4a03ac2f7438f6d74706fce0fe8f58a58512a1dac9e6fcff2e15b6523469282"
	I1210 06:14:49.145485  385546 cri.go:89] found id: "63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7"
	I1210 06:14:49.145490  385546 cri.go:89] found id: "0b17c77ccaaf0facf4210e4593ca7afa37dcce6d104183e98e0ff909ab0e54f1"
	I1210 06:14:49.145495  385546 cri.go:89] found id: ""
	I1210 06:14:49.145549  385546 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:14:49.165869  385546 out.go:203] 
	W1210 06:14:49.167513  385546 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:14:49.167762  385546 out.go:285] * 
	* 
	W1210 06:14:49.174983  385546 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:14:49.176336  385546 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-725426 --alsologtostderr -v=1 failed: exit status 80
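The exit status 80 above is driven by the repeated `sudo runc list -f json` failure shown in the stderr: every retry ends with `open /run/runc: no such file or directory`, so the pause path never obtains a container list to act on. A minimal way to replay the failing calls by hand, assuming the old-k8s-version-725426 profile is still running (wrapping them in `minikube ssh` is an illustration added here; the inner commands are copied from the log):

	out/minikube-linux-amd64 ssh -p old-k8s-version-725426 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-amd64 ssh -p old-k8s-version-725426 "sudo runc list -f json"

If the first call returns the container IDs seen above while the second still fails, the problem is the runtime state directory rather than the containers themselves: crictl talks to cri-o over its socket, whereas `runc list` reads state from /run/runc, a directory that may simply not exist when cri-o is configured with a different OCI runtime or state root (an assumption for illustration, not something this log confirms).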
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-725426
helpers_test.go:244: (dbg) docker inspect old-k8s-version-725426:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1",
	        "Created": "2025-12-10T06:12:38.650542481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:13:49.73005669Z",
	            "FinishedAt": "2025-12-10T06:13:48.47232493Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/hostname",
	        "HostsPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/hosts",
	        "LogPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1-json.log",
	        "Name": "/old-k8s-version-725426",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-725426:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-725426",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1",
	                "LowerDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-725426",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-725426/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-725426",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-725426",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-725426",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f7730463a17a565f4431df425a31cf56484f395d7d2e36babbbd4476b8a2a44e",
	            "SandboxKey": "/var/run/docker/netns/f7730463a17a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-725426": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1ead66c643ddb232e3817c16b9e356f55b33d7d7d004331db07c60da2882eda",
	                    "EndpointID": "c72408290572b6219441ca27b4522c9778eeade7f89c4aeebebe39b3ab507efd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1a:26:13:06:87:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-725426",
	                        "565a7417ad85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426: exit status 2 (462.794199ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-725426 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-725426 logs -n 25: (1.298353539s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-094798 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cri-dockerd --version                                                                                                                              │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo containerd config dump                                                                                                                             │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo crio config                                                                                                                                        │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p bridge-094798                                                                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                          │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                              │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1             │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:14:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:14:41.429421  383776 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:41.429546  383776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:41.429554  383776 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:41.429561  383776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:41.429777  383776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:14:41.430296  383776 out.go:368] Setting JSON to false
	I1210 06:14:41.431768  383776 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3425,"bootTime":1765343856,"procs":414,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:14:41.431828  383776 start.go:143] virtualization: kvm guest
	I1210 06:14:41.433571  383776 out.go:179] * [no-preload-468539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:14:41.435013  383776 notify.go:221] Checking for updates...
	I1210 06:14:41.435020  383776 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:14:41.436189  383776 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:14:41.437154  383776 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:41.438182  383776 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:14:41.439229  383776 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:14:41.440146  383776 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:14:41.441583  383776 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:41.442044  383776 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:14:41.468381  383776 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:14:41.468539  383776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:41.538032  383776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-10 06:14:41.521868374 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:41.538216  383776 docker.go:319] overlay module found
	I1210 06:14:41.541186  383776 out.go:179] * Using the docker driver based on existing profile
	I1210 06:14:41.542353  383776 start.go:309] selected driver: docker
	I1210 06:14:41.542374  383776 start.go:927] validating driver "docker" against &{Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:41.542492  383776 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:14:41.543268  383776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:41.609006  383776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-10 06:14:41.599637973 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:41.609289  383776 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:14:41.609322  383776 cni.go:84] Creating CNI manager for ""
	I1210 06:14:41.609384  383776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:41.609422  383776 start.go:353] cluster config:
	{Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:41.610974  383776 out.go:179] * Starting "no-preload-468539" primary control-plane node in "no-preload-468539" cluster
	I1210 06:14:41.611886  383776 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:14:41.612925  383776 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:14:41.613819  383776 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:14:41.613905  383776 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:14:41.613911  383776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/config.json ...
	I1210 06:14:41.614072  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:41.637278  383776 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:14:41.637296  383776 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:14:41.637311  383776 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:14:41.637337  383776 start.go:360] acquireMachinesLock for no-preload-468539: {Name:mkf25110bcf822b894cb65642adeaf2352263d1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:41.637394  383776 start.go:364] duration metric: took 34.884µs to acquireMachinesLock for "no-preload-468539"
	I1210 06:14:41.637410  383776 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:14:41.637415  383776 fix.go:54] fixHost starting: 
	I1210 06:14:41.637602  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:41.655563  383776 fix.go:112] recreateIfNeeded on no-preload-468539: state=Stopped err=<nil>
	W1210 06:14:41.655587  383776 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:14:42.312146  377144 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 06:14:42.312215  377144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:14:42.312341  377144 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:14:42.312414  377144 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:14:42.312466  377144 kubeadm.go:319] OS: Linux
	I1210 06:14:42.312567  377144 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:14:42.312647  377144 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:14:42.312728  377144 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:14:42.312802  377144 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:14:42.312868  377144 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:14:42.312932  377144 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:14:42.313004  377144 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:14:42.313065  377144 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:14:42.313192  377144 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:14:42.313360  377144 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:14:42.313479  377144 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:14:42.313582  377144 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:14:42.315318  377144 out.go:252]   - Generating certificates and keys ...
	I1210 06:14:42.315416  377144 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:14:42.315491  377144 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:14:42.315563  377144 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:14:42.315647  377144 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:14:42.315735  377144 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:14:42.315805  377144 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:14:42.315889  377144 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:14:42.316024  377144 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-125336 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 06:14:42.316075  377144 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:14:42.316246  377144 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-125336 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 06:14:42.316361  377144 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:14:42.316432  377144 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:14:42.316473  377144 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:14:42.316527  377144 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:14:42.316572  377144 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:14:42.316622  377144 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:14:42.316667  377144 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:14:42.316728  377144 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:14:42.316785  377144 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:14:42.316863  377144 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:14:42.316924  377144 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:14:42.318037  377144 out.go:252]   - Booting up control plane ...
	I1210 06:14:42.318130  377144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:14:42.318197  377144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:14:42.318258  377144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:14:42.318349  377144 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:14:42.318427  377144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:14:42.318540  377144 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:14:42.318655  377144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:14:42.318727  377144 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:14:42.318886  377144 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:14:42.318977  377144 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:14:42.319031  377144 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001681918s
	I1210 06:14:42.319181  377144 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:14:42.319309  377144 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1210 06:14:42.319444  377144 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:14:42.319565  377144 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:14:42.319675  377144 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.196157281s
	I1210 06:14:42.319787  377144 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.37391519s
	I1210 06:14:42.319871  377144 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001533642s
	I1210 06:14:42.319969  377144 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:14:42.320089  377144 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:14:42.320148  377144 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:14:42.320392  377144 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-125336 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:14:42.320447  377144 kubeadm.go:319] [bootstrap-token] Using token: hzyua2.uklciv6onhfd51v4
	I1210 06:14:42.322274  377144 out.go:252]   - Configuring RBAC rules ...
	I1210 06:14:42.322368  377144 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:14:42.322449  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:14:42.322572  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:14:42.322703  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:14:42.322807  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:14:42.322888  377144 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:14:42.322987  377144 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:14:42.323025  377144 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:14:42.323068  377144 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:14:42.323074  377144 kubeadm.go:319] 
	I1210 06:14:42.323183  377144 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:14:42.323193  377144 kubeadm.go:319] 
	I1210 06:14:42.323265  377144 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:14:42.323271  377144 kubeadm.go:319] 
	I1210 06:14:42.323292  377144 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:14:42.323346  377144 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:14:42.323390  377144 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:14:42.323395  377144 kubeadm.go:319] 
	I1210 06:14:42.323447  377144 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:14:42.323452  377144 kubeadm.go:319] 
	I1210 06:14:42.323492  377144 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:14:42.323498  377144 kubeadm.go:319] 
	I1210 06:14:42.323543  377144 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:14:42.323615  377144 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:14:42.323685  377144 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:14:42.323691  377144 kubeadm.go:319] 
	I1210 06:14:42.323765  377144 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:14:42.323830  377144 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:14:42.323839  377144 kubeadm.go:319] 
	I1210 06:14:42.323933  377144 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token hzyua2.uklciv6onhfd51v4 \
	I1210 06:14:42.324025  377144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc \
	I1210 06:14:42.324051  377144 kubeadm.go:319] 	--control-plane 
	I1210 06:14:42.324058  377144 kubeadm.go:319] 
	I1210 06:14:42.324154  377144 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:14:42.324162  377144 kubeadm.go:319] 
	I1210 06:14:42.324284  377144 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token hzyua2.uklciv6onhfd51v4 \
	I1210 06:14:42.324448  377144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc 
	I1210 06:14:42.324462  377144 cni.go:84] Creating CNI manager for ""
	I1210 06:14:42.324468  377144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:42.325653  377144 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:14:41.657022  383776 out.go:252] * Restarting existing docker container for "no-preload-468539" ...
	I1210 06:14:41.657090  383776 cli_runner.go:164] Run: docker start no-preload-468539
	I1210 06:14:41.777604  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:41.938780  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:41.954432  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:41.979311  383776 kic.go:430] container "no-preload-468539" state is running.
	I1210 06:14:41.979737  383776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:14:42.003009  383776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/config.json ...
	I1210 06:14:42.003291  383776 machine.go:94] provisionDockerMachine start ...
	I1210 06:14:42.003391  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:42.024424  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:42.024731  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:42.024749  383776 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:14:42.025356  383776 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58078->127.0.0.1:33118: read: connection reset by peer
	I1210 06:14:42.094960  383776 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095067  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:14:42.095108  383776 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 139.852µs
	I1210 06:14:42.095130  383776 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:14:42.095135  383776 cache.go:107] acquiring lock: {Name:mk1e61937bbcbe456972ee92ce51441d0a310af5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095177  383776 cache.go:107] acquiring lock: {Name:mk615200abc7eac862a5e41cd77ae4b62bf451cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095215  383776 cache.go:107] acquiring lock: {Name:mkfaee1dcd6a6f37ecb9d19fcd839a5a6d9b20e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095234  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:14:42.095244  383776 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 67.478µs
	I1210 06:14:42.095254  383776 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:14:42.095264  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:14:42.095279  383776 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 66.397µs
	I1210 06:14:42.095290  383776 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:14:42.095272  383776 cache.go:107] acquiring lock: {Name:mk76394a7d1abe4be60a9e73a4b33f52c38d5e6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095296  383776 cache.go:107] acquiring lock: {Name:mke4d7efb2ee4879b97924080e0d429a33c1d765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095321  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:14:42.095329  383776 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 60.897µs
	I1210 06:14:42.095337  383776 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095150  383776 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095345  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:14:42.095365  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:14:42.095368  383776 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 60.424µs
	I1210 06:14:42.095386  383776 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095372  383776 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 225.92µs
	I1210 06:14:42.095397  383776 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:14:42.095388  383776 cache.go:107] acquiring lock: {Name:mk1df93d14c27f679df68c721474a110ecfc043b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095417  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:14:42.095425  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:14:42.095425  383776 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 306.392µs
	I1210 06:14:42.095432  383776 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095432  383776 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 46.891µs
	I1210 06:14:42.095441  383776 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095449  383776 cache.go:87] Successfully saved all images to host disk.
	I1210 06:14:45.158647  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-468539
	
	I1210 06:14:45.158675  383776 ubuntu.go:182] provisioning hostname "no-preload-468539"
	I1210 06:14:45.158744  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.179657  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:45.179937  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:45.179959  383776 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-468539 && echo "no-preload-468539" | sudo tee /etc/hostname
	I1210 06:14:45.322545  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-468539
	
	I1210 06:14:45.322637  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.340740  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:45.340968  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:45.340984  383776 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-468539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-468539/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-468539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:14:45.471299  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:14:45.471328  383776 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:14:45.471359  383776 ubuntu.go:190] setting up certificates
	I1210 06:14:45.471371  383776 provision.go:84] configureAuth start
	I1210 06:14:45.471428  383776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:14:45.489566  383776 provision.go:143] copyHostCerts
	I1210 06:14:45.489628  383776 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:14:45.489641  383776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:14:45.489723  383776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:14:45.489849  383776 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:14:45.489862  383776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:14:45.489904  383776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:14:45.490010  383776 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:14:45.490021  383776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:14:45.490063  383776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:14:45.490171  383776 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.no-preload-468539 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-468539]
	I1210 06:14:45.606504  383776 provision.go:177] copyRemoteCerts
	I1210 06:14:45.606572  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:14:45.606613  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.624205  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:45.719928  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:14:45.737064  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:14:45.753641  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:14:45.770537  383776 provision.go:87] duration metric: took 299.15054ms to configureAuth
	I1210 06:14:45.770560  383776 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:14:45.770722  383776 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:45.770826  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.788984  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:45.789241  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:45.789268  383776 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:14:46.109572  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:14:46.109617  383776 machine.go:97] duration metric: took 4.106285489s to provisionDockerMachine
	I1210 06:14:46.109629  383776 start.go:293] postStartSetup for "no-preload-468539" (driver="docker")
	I1210 06:14:46.109645  383776 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:14:46.109712  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:14:46.109770  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.131467  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:46.239256  383776 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:14:46.243540  383776 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:14:46.243570  383776 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:14:46.243582  383776 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:14:46.243651  383776 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:14:46.243855  383776 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:14:46.243991  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:14:46.252467  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:14:46.271497  383776 start.go:296] duration metric: took 161.856057ms for postStartSetup
	I1210 06:14:46.271571  383776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:14:46.271605  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.290735  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:46.385096  383776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:14:46.389534  383776 fix.go:56] duration metric: took 4.752113452s for fixHost
	I1210 06:14:46.389559  383776 start.go:83] releasing machines lock for "no-preload-468539", held for 4.752153053s
	I1210 06:14:46.389624  383776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:14:46.407510  383776 ssh_runner.go:195] Run: cat /version.json
	I1210 06:14:46.407554  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.407603  383776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:14:46.407679  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.424089  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:46.425923  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:42.326668  377144 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:14:42.331553  377144 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 06:14:42.331570  377144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 06:14:42.344373  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 06:14:42.552457  377144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:14:42.552512  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:42.552552  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-125336 minikube.k8s.io/updated_at=2025_12_10T06_14_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=default-k8s-diff-port-125336 minikube.k8s.io/primary=true
	I1210 06:14:42.642090  377144 ops.go:34] apiserver oom_adj: -16
	I1210 06:14:42.642228  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:43.142730  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:43.642581  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:44.143046  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:44.642498  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:45.142314  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:45.643194  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:46.143274  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:46.643151  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:46.716894  377144 kubeadm.go:1114] duration metric: took 4.164449326s to wait for elevateKubeSystemPrivileges
	I1210 06:14:46.716929  377144 kubeadm.go:403] duration metric: took 15.358070049s to StartCluster
	I1210 06:14:46.716950  377144 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:46.717021  377144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:46.718697  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:46.719005  377144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:14:46.719006  377144 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:46.719135  377144 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:14:46.719207  377144 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:46.719237  377144 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:14:46.719260  377144 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:14:46.719287  377144 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:14:46.719264  377144 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	I1210 06:14:46.719445  377144 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:14:46.719622  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:46.719874  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:46.720494  377144 out.go:179] * Verifying Kubernetes components...
	I1210 06:14:46.721575  377144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:46.743912  377144 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:46.744934  377144 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	I1210 06:14:46.744979  377144 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:14:46.745235  377144 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:46.745255  377144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:14:46.745310  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:46.745504  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:46.774301  377144 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:46.774330  377144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:14:46.774403  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:46.775233  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:46.797371  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:46.810338  377144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:14:46.876372  377144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:46.901986  377144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:46.911068  377144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:47.041432  377144 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1210 06:14:47.043208  377144 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:14:47.276405  377144 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
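The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the network gateway (and adds a log directive). Reconstructed from that sed expression, the relevant Corefile fragment should look roughly like:

        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf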
	I1210 06:14:46.516151  383776 ssh_runner.go:195] Run: systemctl --version
	I1210 06:14:46.570658  383776 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:14:46.606971  383776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:14:46.611525  383776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:14:46.611576  383776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:14:46.619317  383776 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:14:46.619338  383776 start.go:496] detecting cgroup driver to use...
	I1210 06:14:46.619368  383776 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:14:46.619403  383776 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:14:46.633289  383776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:14:46.644582  383776 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:14:46.644630  383776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:14:46.659173  383776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:14:46.671613  383776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:14:46.783157  383776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:14:46.898880  383776 docker.go:234] disabling docker service ...
	I1210 06:14:46.898946  383776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:14:46.918937  383776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:14:46.935202  383776 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:14:47.065480  383776 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:14:47.182444  383776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:14:47.196804  383776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:14:47.211955  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:47.377561  383776 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:14:47.377624  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.388610  383776 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:14:47.388674  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.397833  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.408653  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.418145  383776 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:14:47.426825  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.436299  383776 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.446450  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.457001  383776 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:14:47.466277  383776 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:14:47.475171  383776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:47.559395  383776 ssh_runner.go:195] Run: sudo systemctl restart crio
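The sed chain above pins the pause image, forces the systemd cgroup manager, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. A hedged spot-check of the effective values after the restart:

    sudo grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo crio config | grep -E 'pause_image|cgroup_manager'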
	I1210 06:14:47.705550  383776 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:14:47.705616  383776 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:14:47.709537  383776 start.go:564] Will wait 60s for crictl version
	I1210 06:14:47.709596  383776 ssh_runner.go:195] Run: which crictl
	I1210 06:14:47.713056  383776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:14:47.741912  383776 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:14:47.741973  383776 ssh_runner.go:195] Run: crio --version
	I1210 06:14:47.772416  383776 ssh_runner.go:195] Run: crio --version
	I1210 06:14:47.801886  383776 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:14:47.802905  383776 cli_runner.go:164] Run: docker network inspect no-preload-468539 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:14:47.820157  383776 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 06:14:47.823949  383776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:14:47.834274  383776 kubeadm.go:884] updating cluster {Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:14:47.834469  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:47.992158  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:48.135001  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:48.268202  383776 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:14:48.268255  383776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:14:48.302310  383776 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:14:48.302333  383776 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:14:48.302344  383776 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 crio true true} ...
	I1210 06:14:48.302474  383776 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-468539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
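The [Unit]/[Service] fragment above is written as a systemd drop-in that replaces ExecStart with the version-pinned kubelet binary and node-specific flags. To inspect the unit as systemd actually merges it on the node (standard systemctl usage, shown here as a sketch):

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager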
	I1210 06:14:48.302562  383776 ssh_runner.go:195] Run: crio config
	I1210 06:14:48.350151  383776 cni.go:84] Creating CNI manager for ""
	I1210 06:14:48.350172  383776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:48.350186  383776 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:14:48.350208  383776 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-468539 NodeName:no-preload-468539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:14:48.350350  383776 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-468539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:14:48.350413  383776 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:14:48.358354  383776 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:14:48.358425  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:14:48.366792  383776 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:14:48.379206  383776 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:14:48.391357  383776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
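The kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. Newer kubeadm releases ship a subcommand for sanity-checking such a file before it is applied; assuming this v1.35.0-rc.1 binary supports it, a check would look like:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new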
	I1210 06:14:48.403949  383776 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:14:48.407573  383776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:14:48.417141  383776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:48.497946  383776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:48.521543  383776 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539 for IP: 192.168.94.2
	I1210 06:14:48.521565  383776 certs.go:195] generating shared ca certs ...
	I1210 06:14:48.521584  383776 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:48.521743  383776 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:14:48.521806  383776 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:14:48.521820  383776 certs.go:257] generating profile certs ...
	I1210 06:14:48.521922  383776 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/client.key
	I1210 06:14:48.521992  383776 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/apiserver.key.e78c3671
	I1210 06:14:48.522040  383776 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/proxy-client.key
	I1210 06:14:48.522197  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:14:48.522239  383776 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:14:48.522252  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:14:48.522286  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:14:48.522319  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:14:48.522354  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:14:48.522410  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:14:48.523275  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:14:48.542678  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:14:48.561364  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:14:48.581292  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:14:48.605253  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:14:48.627817  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:14:48.645324  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:14:48.663571  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:14:48.681633  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:14:48.700299  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:14:48.719401  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:14:48.737822  383776 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:14:48.752621  383776 ssh_runner.go:195] Run: openssl version
	I1210 06:14:48.761576  383776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.770938  383776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:14:48.780634  383776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.785339  383776 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.785385  383776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.842949  383776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:14:48.853278  383776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.862863  383776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:14:48.876238  383776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.882101  383776 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.882313  383776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.944699  383776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:14:48.956037  383776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:14:48.965887  383776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:14:48.975196  383776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:14:48.979943  383776 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:14:48.979995  383776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:14:49.033526  383776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:14:49.044021  383776 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:14:49.049472  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:14:49.109982  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:14:49.172502  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:14:49.245491  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:14:49.316642  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:14:49.380496  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
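These openssl calls rely on exit codes rather than output: -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, and the earlier -hash -noout runs compute the subject hashes behind the /etc/ssl/certs/<hash>.0 symlinks verified with test -L. A minimal illustration of the expiry check:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h (or could not be read)"
    fi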
	I1210 06:14:49.440091  383776 kubeadm.go:401] StartCluster: {Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:49.440295  383776 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:14:49.440416  383776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:14:49.474117  383776 cri.go:89] found id: "986b3c2f0cda833eb6ebd4b6f5458a0e267bb8b83d3a119c68be6281e7585474"
	I1210 06:14:49.474147  383776 cri.go:89] found id: "87175e8498ad3223a893f9948444ea564e4f493dc0ce2a68eed9c2e36f356f00"
	I1210 06:14:49.474154  383776 cri.go:89] found id: "ec6692c835d1d4b482f3d9e22fd61d623beb739ec5760b5e0b356cba3798f5ef"
	I1210 06:14:49.474159  383776 cri.go:89] found id: "c134cc07c343ee0eec86fdc21ea9f07ab5dc05344377ced872b852a9c514a84c"
	I1210 06:14:49.474164  383776 cri.go:89] found id: ""
	I1210 06:14:49.474218  383776 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:14:49.488161  383776 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:49Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:14:49.488242  383776 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:14:49.496286  383776 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:14:49.496305  383776 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:14:49.496350  383776 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:14:49.504825  383776 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:14:49.506003  383776 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-468539" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:49.507025  383776 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-468539" cluster setting kubeconfig missing "no-preload-468539" context setting]
	I1210 06:14:49.508365  383776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:49.510698  383776 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:14:49.519575  383776 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1210 06:14:49.519606  383776 kubeadm.go:602] duration metric: took 23.295226ms to restartPrimaryControlPlane
	I1210 06:14:49.519617  383776 kubeadm.go:403] duration metric: took 79.549016ms to StartCluster
	I1210 06:14:49.519641  383776 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:49.519700  383776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:49.521475  383776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:49.521730  383776 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:49.521956  383776 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:49.521948  383776 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:14:49.522045  383776 addons.go:70] Setting storage-provisioner=true in profile "no-preload-468539"
	I1210 06:14:49.522062  383776 addons.go:239] Setting addon storage-provisioner=true in "no-preload-468539"
	W1210 06:14:49.522070  383776 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:14:49.522203  383776 addons.go:70] Setting dashboard=true in profile "no-preload-468539"
	I1210 06:14:49.522216  383776 addons.go:239] Setting addon dashboard=true in "no-preload-468539"
	W1210 06:14:49.522223  383776 addons.go:248] addon dashboard should already be in state true
	I1210 06:14:49.522251  383776 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:14:49.522271  383776 addons.go:70] Setting default-storageclass=true in profile "no-preload-468539"
	I1210 06:14:49.522290  383776 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-468539"
	I1210 06:14:49.522625  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.522705  383776 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:14:49.522760  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.523212  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.523945  383776 out.go:179] * Verifying Kubernetes components...
	I1210 06:14:49.525314  383776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:49.553441  383776 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:49.554480  383776 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:49.554505  383776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:14:49.554698  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:49.556681  383776 addons.go:239] Setting addon default-storageclass=true in "no-preload-468539"
	W1210 06:14:49.556739  383776 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:14:49.556802  383776 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:14:49.557381  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.560435  383776 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:14:49.561654  383776 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Dec 10 06:14:20 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:20.534066543Z" level=info msg="Started container" PID=1744 containerID=7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper id=5e09033d-058d-4e9f-8315-4b2a9b9c5741 name=/runtime.v1.RuntimeService/StartContainer sandboxID=052f28d6d942525bc4843b7441d2a436489d78396e28047b7257a883300d55da
	Dec 10 06:14:21 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:21.491167342Z" level=info msg="Removing container: 8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e" id=fbf32c77-26aa-40fe-a22a-cb01e70b2416 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:14:21 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:21.501969792Z" level=info msg="Removed container 8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=fbf32c77-26aa-40fe-a22a-cb01e70b2416 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.519785864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=900be59b-4e3a-474b-905a-ae4e9d080060 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.520968363Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=49cba5d2-3ed4-4e5e-b9eb-3c921efb1f1e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.521954951Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=feaf59e1-57e1-4ffa-af8a-e01aeae6973b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.52209247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.52630982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.526478945Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8c71644baf890610956c58e5ef7646d1f05c7bafc1fc4801098fc60460bc40ed/merged/etc/passwd: no such file or directory"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.526510205Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8c71644baf890610956c58e5ef7646d1f05c7bafc1fc4801098fc60460bc40ed/merged/etc/group: no such file or directory"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.526761939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.551380034Z" level=info msg="Created container 39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232: kube-system/storage-provisioner/storage-provisioner" id=feaf59e1-57e1-4ffa-af8a-e01aeae6973b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.551844297Z" level=info msg="Starting container: 39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232" id=a5df2216-a69f-4b93-9f84-74757d726036 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.55351525Z" level=info msg="Started container" PID=1760 containerID=39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232 description=kube-system/storage-provisioner/storage-provisioner id=a5df2216-a69f-4b93-9f84-74757d726036 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c93072aae323b32ad98323401b19d1257340958ff27d0627b1fa1000cb6a830e
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.380825864Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a9e3c90-328d-4924-bff8-0e2e778fe853 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.381766699Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1dc8211c-9f19-45ca-89cd-7f4f98ac6276 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.382614761Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=56d1822e-8279-469f-bbc9-815b19a5fe0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.382734551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.387790052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.388246336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.415122184Z" level=info msg="Created container 63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=56d1822e-8279-469f-bbc9-815b19a5fe0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.41560045Z" level=info msg="Starting container: 63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7" id=4c30b1c5-149d-475f-9ea7-c46c6b0ee6e4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.417435472Z" level=info msg="Started container" PID=1778 containerID=63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper id=4c30b1c5-149d-475f-9ea7-c46c6b0ee6e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=052f28d6d942525bc4843b7441d2a436489d78396e28047b7257a883300d55da
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.533172971Z" level=info msg="Removing container: 7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55" id=1090581d-ec72-4be3-9398-a90adc926566 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.543017113Z" level=info msg="Removed container 7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=1090581d-ec72-4be3-9398-a90adc926566 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	63f93756ec3e5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   052f28d6d9425       dashboard-metrics-scraper-5f989dc9cf-dhsb6       kubernetes-dashboard
	39a2f71da71fe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   c93072aae323b       storage-provisioner                              kube-system
	0b17c77ccaaf0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   34 seconds ago      Running             kubernetes-dashboard        0                   443646f09ecf1       kubernetes-dashboard-8694d4445c-8jvqp            kubernetes-dashboard
	3ec52d6fd6de5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           48 seconds ago      Running             coredns                     0                   fd8b94fd03e0e       coredns-5dd5756b68-vxb6d                         kube-system
	cd3165120c2c5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   b3760044655bb       busybox                                          default
	5d02e309047c0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   5ef13afa20641       kindnet-5zsjn                                    kube-system
	9e0c7c4b5d625       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   c93072aae323b       storage-provisioner                              kube-system
	5990f9b53cfb7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           48 seconds ago      Running             kube-proxy                  0                   8e8663b05c653       kube-proxy-m59j8                                 kube-system
	217c2500f89f7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   cca76a9a67783       etcd-old-k8s-version-725426                      kube-system
	157dd67e0dd14       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   edd3e82f6018b       kube-apiserver-old-k8s-version-725426            kube-system
	8f5281037f2c4       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   8f1996e01ab0c       kube-controller-manager-old-k8s-version-725426   kube-system
	e4a03ac2f7438       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   d00e0885a5ce1       kube-scheduler-old-k8s-version-725426            kube-system
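The table above is crictl-style container status from the old-k8s-version-725426 node; the dashboard-metrics-scraper container has already exited twice. To reproduce or narrow the listing on the node (a sketch; --name filters by container name):

    sudo crictl ps -a --name dashboard-metrics-scraper
    sudo crictl logs 63f93756ec3e5    # container ID taken from the first row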
	
	
	==> coredns [3ec52d6fd6de5971b9c5c66dabd9b83677b9c3bb23ccec3b078f383aee8c9fbe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58017 - 9941 "HINFO IN 3691086577375494435.4746766571204173882. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06497998s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-725426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-725426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=old-k8s-version-725426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_12_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-725426
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:13:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-725426
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                b7d5f572-8473-408c-855f-67c8fb07b4fa
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-vxb6d                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-725426                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-5zsjn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-725426             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-725426    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-m59j8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-725426             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dhsb6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8jvqp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node old-k8s-version-725426 event: Registered Node old-k8s-version-725426 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-725426 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-725426 event: Registered Node old-k8s-version-725426 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [217c2500f89f71d1324ffbf4ed5b1db6ba6968887bda00d70e62b6c6b61b2d9c] <==
	{"level":"info","ts":"2025-12-10T06:13:58.031064Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-10T06:13:58.030435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-10T06:13:58.028895Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-10T06:13:58.031741Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-10T06:13:58.032519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:13:58.032574Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:13:58.036682Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-10T06:13:58.036922Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:13:58.036981Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:13:58.037044Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:13:58.037071Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:13:59.217592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:59.217635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:59.217652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:59.217668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.217676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.217688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.217698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.219282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:13:59.2193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:13:59.219293Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-725426 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:13:59.219525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:13:59.219622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:13:59.220452Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:13:59.220571Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 06:14:50 up 57 min,  0 user,  load average: 5.18, 4.48, 2.84
	Linux old-k8s-version-725426 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d02e309047c074c7ea66ed67d4e89f47b261b4f75b4e00eb2cb3070da54fe1c] <==
	I1210 06:14:02.540847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:14:02.541329       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:14:02.541505       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:14:02.541531       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:14:02.541544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:14:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:14:02.814596       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:14:02.814732       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:14:02.814747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:14:02.815434       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:14:03.219316       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:14:03.219353       1 metrics.go:72] Registering metrics
	I1210 06:14:03.219425       1 controller.go:711] "Syncing nftables rules"
	I1210 06:14:12.814279       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:12.814352       1 main.go:301] handling current node
	I1210 06:14:22.814652       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:22.814690       1 main.go:301] handling current node
	I1210 06:14:32.814357       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:32.814386       1 main.go:301] handling current node
	I1210 06:14:42.816281       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:42.816322       1 main.go:301] handling current node
	
	
	==> kube-apiserver [157dd67e0dd14a9973e3a0ca206bd7d0544b492de0e9c6fb754a5a6046365641] <==
	I1210 06:14:00.396775       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 06:14:00.397207       1 aggregator.go:166] initial CRD sync complete...
	I1210 06:14:00.397223       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 06:14:00.397231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:14:00.397241       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:14:00.397562       1 shared_informer.go:318] Caches are synced for configmaps
	I1210 06:14:00.397621       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1210 06:14:00.397629       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1210 06:14:00.397875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1210 06:14:00.400421       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 06:14:00.400456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1210 06:14:00.408642       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:14:00.427099       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:14:00.462850       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1210 06:14:01.300485       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:14:01.368110       1 controller.go:624] quota admission added evaluator for: namespaces
	I1210 06:14:01.418022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 06:14:01.436486       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:14:01.445837       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:14:01.453490       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 06:14:01.487436       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.170.212"}
	I1210 06:14:01.498971       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.18.54"}
	I1210 06:14:12.610632       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 06:14:12.648581       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 06:14:12.805431       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8f5281037f2c44ec8cb539eef6c1a25935bd970e552a1b7795809e847a03d5ca] <==
	I1210 06:14:12.792340       1 event.go:307] "Event occurred" object="old-k8s-version-725426" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-725426 event: Registered Node old-k8s-version-725426 in Controller"
	I1210 06:14:12.792185       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1210 06:14:12.792888       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-725426"
	I1210 06:14:12.792994       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:14:12.793700       1 shared_informer.go:318] Caches are synced for daemon sets
	I1210 06:14:12.795918       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1210 06:14:12.814470       1 shared_informer.go:318] Caches are synced for node
	I1210 06:14:12.814566       1 range_allocator.go:174] "Sending events to api server"
	I1210 06:14:12.814619       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1210 06:14:12.814628       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1210 06:14:12.814637       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1210 06:14:12.843822       1 shared_informer.go:318] Caches are synced for TTL
	I1210 06:14:12.848028       1 shared_informer.go:318] Caches are synced for persistent volume
	I1210 06:14:13.180493       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:14:13.222967       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:14:13.223013       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 06:14:16.494195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.307039ms"
	I1210 06:14:16.494333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.479µs"
	I1210 06:14:20.500273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.293µs"
	I1210 06:14:21.501272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="126.301µs"
	I1210 06:14:22.507489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="131.551µs"
	I1210 06:14:33.659544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.172331ms"
	I1210 06:14:33.659673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.828µs"
	I1210 06:14:35.543762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.71µs"
	I1210 06:14:42.943995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.353µs"
	
	
	==> kube-proxy [5990f9b53cfb7195fb05941363f517e271964705fb893f41bcded21a1b4fc06e] <==
	I1210 06:14:02.374362       1 server_others.go:69] "Using iptables proxy"
	I1210 06:14:02.387596       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1210 06:14:02.414068       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:14:02.420850       1 server_others.go:152] "Using iptables Proxier"
	I1210 06:14:02.420898       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 06:14:02.420917       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 06:14:02.420958       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 06:14:02.421272       1 server.go:846] "Version info" version="v1.28.0"
	I1210 06:14:02.421345       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:02.422180       1 config.go:315] "Starting node config controller"
	I1210 06:14:02.422255       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 06:14:02.422754       1 config.go:188] "Starting service config controller"
	I1210 06:14:02.422781       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 06:14:02.422803       1 config.go:97] "Starting endpoint slice config controller"
	I1210 06:14:02.422807       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 06:14:02.522692       1 shared_informer.go:318] Caches are synced for node config
	I1210 06:14:02.523772       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 06:14:02.523792       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e4a03ac2f7438f6d74706fce0fe8f58a58512a1dac9e6fcff2e15b6523469282] <==
	I1210 06:13:58.587152       1 serving.go:348] Generated self-signed cert in-memory
	I1210 06:14:00.403215       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1210 06:14:00.403246       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:00.410074       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1210 06:14:00.410289       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:14:00.410390       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 06:14:00.410234       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:14:00.411107       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1210 06:14:00.410420       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1210 06:14:00.412661       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1210 06:14:00.412921       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1210 06:14:00.511937       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1210 06:14:00.511940       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 06:14:00.511948       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.632482     717 topology_manager.go:215] "Topology Admit Handler" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-dhsb6"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803618     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6be06f9-987c-423e-8476-bd6ee21c0520-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8jvqp\" (UID: \"d6be06f9-987c-423e-8476-bd6ee21c0520\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8jvqp"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803689     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0db0eaef-daad-4d60-ad42-c7be8937f192-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-dhsb6\" (UID: \"0db0eaef-daad-4d60-ad42-c7be8937f192\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803846     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phddr\" (UniqueName: \"kubernetes.io/projected/d6be06f9-987c-423e-8476-bd6ee21c0520-kube-api-access-phddr\") pod \"kubernetes-dashboard-8694d4445c-8jvqp\" (UID: \"d6be06f9-987c-423e-8476-bd6ee21c0520\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8jvqp"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803907     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvcr2\" (UniqueName: \"kubernetes.io/projected/0db0eaef-daad-4d60-ad42-c7be8937f192-kube-api-access-zvcr2\") pod \"dashboard-metrics-scraper-5f989dc9cf-dhsb6\" (UID: \"0db0eaef-daad-4d60-ad42-c7be8937f192\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6"
	Dec 10 06:14:16 old-k8s-version-725426 kubelet[717]: I1210 06:14:16.485783     717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8jvqp" podStartSLOduration=1.069236846 podCreationTimestamp="2025-12-10 06:14:12 +0000 UTC" firstStartedPulling="2025-12-10 06:14:12.953514723 +0000 UTC m=+15.738490139" lastFinishedPulling="2025-12-10 06:14:16.36998249 +0000 UTC m=+19.154957915" observedRunningTime="2025-12-10 06:14:16.485360445 +0000 UTC m=+19.270335883" watchObservedRunningTime="2025-12-10 06:14:16.485704622 +0000 UTC m=+19.270680060"
	Dec 10 06:14:20 old-k8s-version-725426 kubelet[717]: I1210 06:14:20.485405     717 scope.go:117] "RemoveContainer" containerID="8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e"
	Dec 10 06:14:21 old-k8s-version-725426 kubelet[717]: I1210 06:14:21.489826     717 scope.go:117] "RemoveContainer" containerID="8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e"
	Dec 10 06:14:21 old-k8s-version-725426 kubelet[717]: I1210 06:14:21.490002     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:21 old-k8s-version-725426 kubelet[717]: E1210 06:14:21.490422     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:22 old-k8s-version-725426 kubelet[717]: I1210 06:14:22.494806     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:22 old-k8s-version-725426 kubelet[717]: E1210 06:14:22.495230     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:23 old-k8s-version-725426 kubelet[717]: I1210 06:14:23.496976     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:23 old-k8s-version-725426 kubelet[717]: E1210 06:14:23.497359     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:32 old-k8s-version-725426 kubelet[717]: I1210 06:14:32.519270     717 scope.go:117] "RemoveContainer" containerID="9e0c7c4b5d6256e465920477a0bbc62ffede25d7f1093bad071e1332673719ed"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: I1210 06:14:35.380279     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: I1210 06:14:35.531894     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: I1210 06:14:35.532168     717 scope.go:117] "RemoveContainer" containerID="63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: E1210 06:14:35.532566     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:42 old-k8s-version-725426 kubelet[717]: I1210 06:14:42.934873     717 scope.go:117] "RemoveContainer" containerID="63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7"
	Dec 10 06:14:42 old-k8s-version-725426 kubelet[717]: E1210 06:14:42.935170     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: kubelet.service: Consumed 1.485s CPU time.
	
	
	==> kubernetes-dashboard [0b17c77ccaaf0facf4210e4593ca7afa37dcce6d104183e98e0ff909ab0e54f1] <==
	2025/12/10 06:14:16 Starting overwatch
	2025/12/10 06:14:16 Using namespace: kubernetes-dashboard
	2025/12/10 06:14:16 Using in-cluster config to connect to apiserver
	2025/12/10 06:14:16 Using secret token for csrf signing
	2025/12/10 06:14:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:14:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:14:16 Successful initial request to the apiserver, version: v1.28.0
	2025/12/10 06:14:16 Generating JWE encryption key
	2025/12/10 06:14:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:14:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:14:16 Initializing JWE encryption key from synchronized object
	2025/12/10 06:14:16 Creating in-cluster Sidecar client
	2025/12/10 06:14:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:14:16 Serving insecurely on HTTP port: 9090
	2025/12/10 06:14:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232] <==
	I1210 06:14:32.565934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:14:32.572608       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:14:32.572642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 06:14:49.973224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:14:49.973404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-725426_c777854a-ab64-4118-a65e-f1c661b70492!
	I1210 06:14:49.973370       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2524e9c-7625-4e55-9d2f-d2c7b14c23d5", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-725426_c777854a-ab64-4118-a65e-f1c661b70492 became leader
	I1210 06:14:50.074397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-725426_c777854a-ab64-4118-a65e-f1c661b70492!
	
	
	==> storage-provisioner [9e0c7c4b5d6256e465920477a0bbc62ffede25d7f1093bad071e1332673719ed] <==
	I1210 06:14:02.343352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:14:32.351433       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725426 -n old-k8s-version-725426
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725426 -n old-k8s-version-725426: exit status 2 (386.183529ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-725426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-725426
helpers_test.go:244: (dbg) docker inspect old-k8s-version-725426:

-- stdout --
	[
	    {
	        "Id": "565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1",
	        "Created": "2025-12-10T06:12:38.650542481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:13:49.73005669Z",
	            "FinishedAt": "2025-12-10T06:13:48.47232493Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/hostname",
	        "HostsPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/hosts",
	        "LogPath": "/var/lib/docker/containers/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1/565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1-json.log",
	        "Name": "/old-k8s-version-725426",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-725426:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-725426",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "565a7417ad854a7ed35617a365dca829fa25f8d3be3eaf17a40b74828ab57ef1",
	                "LowerDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b40167ff82cc93446db6a2604157bc0e7ce9a2383a73efb4fa6f25634d0e151/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-725426",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-725426/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-725426",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-725426",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-725426",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f7730463a17a565f4431df425a31cf56484f395d7d2e36babbbd4476b8a2a44e",
	            "SandboxKey": "/var/run/docker/netns/f7730463a17a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-725426": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1ead66c643ddb232e3817c16b9e356f55b33d7d7d004331db07c60da2882eda",
	                    "EndpointID": "c72408290572b6219441ca27b4522c9778eeade7f89c4aeebebe39b3ab507efd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1a:26:13:06:87:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-725426",
	                        "565a7417ad85"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426: exit status 2 (321.711078ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-725426 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-725426 logs -n 25: (1.119957823s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-094798 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cri-dockerd --version                                                                                                                              │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ ssh     │ -p bridge-094798 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo containerd config dump                                                                                                                             │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo crio config                                                                                                                                        │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p bridge-094798                                                                                                                                                         │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                          │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                              │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1             │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:14:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:14:41.429421  383776 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:41.429546  383776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:41.429554  383776 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:41.429561  383776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:41.429777  383776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:14:41.430296  383776 out.go:368] Setting JSON to false
	I1210 06:14:41.431768  383776 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3425,"bootTime":1765343856,"procs":414,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:14:41.431828  383776 start.go:143] virtualization: kvm guest
	I1210 06:14:41.433571  383776 out.go:179] * [no-preload-468539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:14:41.435013  383776 notify.go:221] Checking for updates...
	I1210 06:14:41.435020  383776 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:14:41.436189  383776 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:14:41.437154  383776 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:41.438182  383776 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:14:41.439229  383776 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:14:41.440146  383776 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:14:41.441583  383776 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:41.442044  383776 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:14:41.468381  383776 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:14:41.468539  383776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:41.538032  383776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-10 06:14:41.521868374 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:41.538216  383776 docker.go:319] overlay module found
	I1210 06:14:41.541186  383776 out.go:179] * Using the docker driver based on existing profile
	I1210 06:14:41.542353  383776 start.go:309] selected driver: docker
	I1210 06:14:41.542374  383776 start.go:927] validating driver "docker" against &{Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:41.542492  383776 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:14:41.543268  383776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:41.609006  383776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-10 06:14:41.599637973 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:41.609289  383776 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:14:41.609322  383776 cni.go:84] Creating CNI manager for ""
	I1210 06:14:41.609384  383776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:41.609422  383776 start.go:353] cluster config:
	{Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:41.610974  383776 out.go:179] * Starting "no-preload-468539" primary control-plane node in "no-preload-468539" cluster
	I1210 06:14:41.611886  383776 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:14:41.612925  383776 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:14:41.613819  383776 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:14:41.613905  383776 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:14:41.613911  383776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/config.json ...
	I1210 06:14:41.614072  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:41.637278  383776 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:14:41.637296  383776 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:14:41.637311  383776 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:14:41.637337  383776 start.go:360] acquireMachinesLock for no-preload-468539: {Name:mkf25110bcf822b894cb65642adeaf2352263d1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:41.637394  383776 start.go:364] duration metric: took 34.884µs to acquireMachinesLock for "no-preload-468539"
	I1210 06:14:41.637410  383776 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:14:41.637415  383776 fix.go:54] fixHost starting: 
	I1210 06:14:41.637602  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:41.655563  383776 fix.go:112] recreateIfNeeded on no-preload-468539: state=Stopped err=<nil>
	W1210 06:14:41.655587  383776 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:14:42.312146  377144 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 06:14:42.312215  377144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:14:42.312341  377144 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:14:42.312414  377144 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:14:42.312466  377144 kubeadm.go:319] OS: Linux
	I1210 06:14:42.312567  377144 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:14:42.312647  377144 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:14:42.312728  377144 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:14:42.312802  377144 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:14:42.312868  377144 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:14:42.312932  377144 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:14:42.313004  377144 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:14:42.313065  377144 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:14:42.313192  377144 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:14:42.313360  377144 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:14:42.313479  377144 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:14:42.313582  377144 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:14:42.315318  377144 out.go:252]   - Generating certificates and keys ...
	I1210 06:14:42.315416  377144 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:14:42.315491  377144 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:14:42.315563  377144 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:14:42.315647  377144 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:14:42.315735  377144 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:14:42.315805  377144 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:14:42.315889  377144 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:14:42.316024  377144 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-125336 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 06:14:42.316075  377144 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:14:42.316246  377144 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-125336 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 06:14:42.316361  377144 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:14:42.316432  377144 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:14:42.316473  377144 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:14:42.316527  377144 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:14:42.316572  377144 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:14:42.316622  377144 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:14:42.316667  377144 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:14:42.316728  377144 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:14:42.316785  377144 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:14:42.316863  377144 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:14:42.316924  377144 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:14:42.318037  377144 out.go:252]   - Booting up control plane ...
	I1210 06:14:42.318130  377144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:14:42.318197  377144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:14:42.318258  377144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:14:42.318349  377144 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:14:42.318427  377144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:14:42.318540  377144 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:14:42.318655  377144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:14:42.318727  377144 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:14:42.318886  377144 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:14:42.318977  377144 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:14:42.319031  377144 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001681918s
	I1210 06:14:42.319181  377144 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:14:42.319309  377144 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1210 06:14:42.319444  377144 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:14:42.319565  377144 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:14:42.319675  377144 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.196157281s
	I1210 06:14:42.319787  377144 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.37391519s
	I1210 06:14:42.319871  377144 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001533642s
	I1210 06:14:42.319969  377144 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:14:42.320089  377144 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:14:42.320148  377144 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:14:42.320392  377144 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-125336 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:14:42.320447  377144 kubeadm.go:319] [bootstrap-token] Using token: hzyua2.uklciv6onhfd51v4
	I1210 06:14:42.322274  377144 out.go:252]   - Configuring RBAC rules ...
	I1210 06:14:42.322368  377144 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:14:42.322449  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:14:42.322572  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:14:42.322703  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:14:42.322807  377144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:14:42.322888  377144 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:14:42.322987  377144 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:14:42.323025  377144 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:14:42.323068  377144 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:14:42.323074  377144 kubeadm.go:319] 
	I1210 06:14:42.323183  377144 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:14:42.323193  377144 kubeadm.go:319] 
	I1210 06:14:42.323265  377144 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:14:42.323271  377144 kubeadm.go:319] 
	I1210 06:14:42.323292  377144 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:14:42.323346  377144 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:14:42.323390  377144 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:14:42.323395  377144 kubeadm.go:319] 
	I1210 06:14:42.323447  377144 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:14:42.323452  377144 kubeadm.go:319] 
	I1210 06:14:42.323492  377144 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:14:42.323498  377144 kubeadm.go:319] 
	I1210 06:14:42.323543  377144 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:14:42.323615  377144 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:14:42.323685  377144 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:14:42.323691  377144 kubeadm.go:319] 
	I1210 06:14:42.323765  377144 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:14:42.323830  377144 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:14:42.323839  377144 kubeadm.go:319] 
	I1210 06:14:42.323933  377144 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token hzyua2.uklciv6onhfd51v4 \
	I1210 06:14:42.324025  377144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc \
	I1210 06:14:42.324051  377144 kubeadm.go:319] 	--control-plane 
	I1210 06:14:42.324058  377144 kubeadm.go:319] 
	I1210 06:14:42.324154  377144 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:14:42.324162  377144 kubeadm.go:319] 
	I1210 06:14:42.324284  377144 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token hzyua2.uklciv6onhfd51v4 \
	I1210 06:14:42.324448  377144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc 
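The kubeadm summary above ends with ready-made join commands for additional control-plane and worker nodes. As a hedged illustration only (the token and CA-cert hash below are placeholders, not values to reuse; a fresh pair would normally come from `kubeadm token create --print-join-command` on the control plane), joining a worker to a cluster served on port 8444 looks like:

    # Run on the prospective worker node; <token> and <hash> are placeholders.
    sudo kubeadm join control-plane.minikube.internal:8444 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>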
	I1210 06:14:42.324462  377144 cni.go:84] Creating CNI manager for ""
	I1210 06:14:42.324468  377144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:42.325653  377144 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:14:41.657022  383776 out.go:252] * Restarting existing docker container for "no-preload-468539" ...
	I1210 06:14:41.657090  383776 cli_runner.go:164] Run: docker start no-preload-468539
	I1210 06:14:41.777604  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:41.938780  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:41.954432  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:41.979311  383776 kic.go:430] container "no-preload-468539" state is running.
	I1210 06:14:41.979737  383776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:14:42.003009  383776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/config.json ...
	I1210 06:14:42.003291  383776 machine.go:94] provisionDockerMachine start ...
	I1210 06:14:42.003391  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:42.024424  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:42.024731  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:42.024749  383776 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:14:42.025356  383776 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58078->127.0.0.1:33118: read: connection reset by peer
	I1210 06:14:42.094960  383776 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095067  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:14:42.095108  383776 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 139.852µs
	I1210 06:14:42.095130  383776 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:14:42.095135  383776 cache.go:107] acquiring lock: {Name:mk1e61937bbcbe456972ee92ce51441d0a310af5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095177  383776 cache.go:107] acquiring lock: {Name:mk615200abc7eac862a5e41cd77ae4b62bf451cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095215  383776 cache.go:107] acquiring lock: {Name:mkfaee1dcd6a6f37ecb9d19fcd839a5a6d9b20e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095234  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:14:42.095244  383776 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 67.478µs
	I1210 06:14:42.095254  383776 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:14:42.095264  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:14:42.095279  383776 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 66.397µs
	I1210 06:14:42.095290  383776 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:14:42.095272  383776 cache.go:107] acquiring lock: {Name:mk76394a7d1abe4be60a9e73a4b33f52c38d5e6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095296  383776 cache.go:107] acquiring lock: {Name:mke4d7efb2ee4879b97924080e0d429a33c1d765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095321  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:14:42.095329  383776 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 60.897µs
	I1210 06:14:42.095337  383776 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095150  383776 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095345  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:14:42.095365  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:14:42.095368  383776 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 60.424µs
	I1210 06:14:42.095386  383776 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095372  383776 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 225.92µs
	I1210 06:14:42.095397  383776 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:14:42.095388  383776 cache.go:107] acquiring lock: {Name:mk1df93d14c27f679df68c721474a110ecfc043b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:42.095417  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:14:42.095425  383776 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:14:42.095425  383776 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 306.392µs
	I1210 06:14:42.095432  383776 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095432  383776 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 46.891µs
	I1210 06:14:42.095441  383776 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:14:42.095449  383776 cache.go:87] Successfully saved all images to host disk.
	I1210 06:14:45.158647  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-468539
	
	I1210 06:14:45.158675  383776 ubuntu.go:182] provisioning hostname "no-preload-468539"
	I1210 06:14:45.158744  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.179657  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:45.179937  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:45.179959  383776 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-468539 && echo "no-preload-468539" | sudo tee /etc/hostname
	I1210 06:14:45.322545  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-468539
	
	I1210 06:14:45.322637  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.340740  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:45.340968  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:45.340984  383776 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-468539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-468539/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-468539' | sudo tee -a /etc/hosts; 
				fi
			fi
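The inline script above is idempotent: it only touches /etc/hosts when no entry for the node name exists yet, either rewriting an existing 127.0.1.1 line or appending one. A quick, read-only way to confirm the result from an SSH session on the node (the hostname is the one used in this run) might be:

    # Both commands only read state; nothing is modified.
    grep -n 'no-preload-468539' /etc/hosts
    hostname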
	I1210 06:14:45.471299  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:14:45.471328  383776 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:14:45.471359  383776 ubuntu.go:190] setting up certificates
	I1210 06:14:45.471371  383776 provision.go:84] configureAuth start
	I1210 06:14:45.471428  383776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:14:45.489566  383776 provision.go:143] copyHostCerts
	I1210 06:14:45.489628  383776 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:14:45.489641  383776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:14:45.489723  383776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:14:45.489849  383776 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:14:45.489862  383776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:14:45.489904  383776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:14:45.490010  383776 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:14:45.490021  383776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:14:45.490063  383776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:14:45.490171  383776 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.no-preload-468539 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-468539]
	I1210 06:14:45.606504  383776 provision.go:177] copyRemoteCerts
	I1210 06:14:45.606572  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:14:45.606613  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.624205  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:45.719928  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:14:45.737064  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:14:45.753641  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:14:45.770537  383776 provision.go:87] duration metric: took 299.15054ms to configureAuth
	I1210 06:14:45.770560  383776 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:14:45.770722  383776 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:45.770826  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:45.788984  383776 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:45.789241  383776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 06:14:45.789268  383776 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:14:46.109572  383776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:14:46.109617  383776 machine.go:97] duration metric: took 4.106285489s to provisionDockerMachine
	I1210 06:14:46.109629  383776 start.go:293] postStartSetup for "no-preload-468539" (driver="docker")
	I1210 06:14:46.109645  383776 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:14:46.109712  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:14:46.109770  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.131467  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:46.239256  383776 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:14:46.243540  383776 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:14:46.243570  383776 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:14:46.243582  383776 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:14:46.243651  383776 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:14:46.243855  383776 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:14:46.243991  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:14:46.252467  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:14:46.271497  383776 start.go:296] duration metric: took 161.856057ms for postStartSetup
	I1210 06:14:46.271571  383776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:14:46.271605  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.290735  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:46.385096  383776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:14:46.389534  383776 fix.go:56] duration metric: took 4.752113452s for fixHost
	I1210 06:14:46.389559  383776 start.go:83] releasing machines lock for "no-preload-468539", held for 4.752153053s
	I1210 06:14:46.389624  383776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-468539
	I1210 06:14:46.407510  383776 ssh_runner.go:195] Run: cat /version.json
	I1210 06:14:46.407554  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.407603  383776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:14:46.407679  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:46.424089  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:46.425923  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:42.326668  377144 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:14:42.331553  377144 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 06:14:42.331570  377144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 06:14:42.344373  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 06:14:42.552457  377144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:14:42.552512  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:42.552552  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-125336 minikube.k8s.io/updated_at=2025_12_10T06_14_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=default-k8s-diff-port-125336 minikube.k8s.io/primary=true
	I1210 06:14:42.642090  377144 ops.go:34] apiserver oom_adj: -16
	I1210 06:14:42.642228  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:43.142730  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:43.642581  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:44.143046  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:44.642498  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:45.142314  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:45.643194  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:46.143274  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:46.643151  377144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:14:46.716894  377144 kubeadm.go:1114] duration metric: took 4.164449326s to wait for elevateKubeSystemPrivileges
	I1210 06:14:46.716929  377144 kubeadm.go:403] duration metric: took 15.358070049s to StartCluster
	I1210 06:14:46.716950  377144 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:46.717021  377144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:46.718697  377144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:46.719005  377144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:14:46.719006  377144 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:46.719135  377144 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:14:46.719207  377144 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:46.719237  377144 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:14:46.719260  377144 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:14:46.719287  377144 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:14:46.719264  377144 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	I1210 06:14:46.719445  377144 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:14:46.719622  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:46.719874  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:46.720494  377144 out.go:179] * Verifying Kubernetes components...
	I1210 06:14:46.721575  377144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:46.743912  377144 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:46.744934  377144 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	I1210 06:14:46.744979  377144 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:14:46.745235  377144 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:46.745255  377144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:14:46.745310  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:46.745504  377144 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:14:46.774301  377144 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:46.774330  377144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:14:46.774403  377144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:14:46.775233  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:46.797371  377144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:14:46.810338  377144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
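The long sed pipeline above patches the coredns ConfigMap in place: it injects a hosts block mapping host.minikube.internal to the host gateway (192.168.103.1 in this run) ahead of the forward plugin, and adds the log plugin before errors. One way to inspect the patched Corefile afterwards:

    # Read-only check from any machine with kubeconfig access to the cluster.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expected to contain, among the existing plugins:
    #   hosts {
    #      192.168.103.1 host.minikube.internal
    #      fallthrough
    #   }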
	I1210 06:14:46.876372  377144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:46.901986  377144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:46.911068  377144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:47.041432  377144 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1210 06:14:47.043208  377144 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:14:47.276405  377144 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
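For context on the lines above: the sed pipeline run at 06:14:46.810338 rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host gateway (192.168.103.1 on this network). Assuming an otherwise stock Corefile, the injected stanza amounts to roughly the following sketch, placed just above the "forward . /etc/resolv.conf" block:

	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }

which is what the "host record injected into CoreDNS's ConfigMap" line then reports.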
	I1210 06:14:46.516151  383776 ssh_runner.go:195] Run: systemctl --version
	I1210 06:14:46.570658  383776 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:14:46.606971  383776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:14:46.611525  383776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:14:46.611576  383776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:14:46.619317  383776 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:14:46.619338  383776 start.go:496] detecting cgroup driver to use...
	I1210 06:14:46.619368  383776 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:14:46.619403  383776 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:14:46.633289  383776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:14:46.644582  383776 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:14:46.644630  383776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:14:46.659173  383776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:14:46.671613  383776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:14:46.783157  383776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:14:46.898880  383776 docker.go:234] disabling docker service ...
	I1210 06:14:46.898946  383776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:14:46.918937  383776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:14:46.935202  383776 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:14:47.065480  383776 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:14:47.182444  383776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:14:47.196804  383776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:14:47.211955  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:47.377561  383776 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:14:47.377624  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.388610  383776 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:14:47.388674  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.397833  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.408653  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.418145  383776 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:14:47.426825  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.436299  383776 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.446450  383776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:14:47.457001  383776 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:14:47.466277  383776 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:14:47.475171  383776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:47.559395  383776 ssh_runner.go:195] Run: sudo systemctl restart crio
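The run of sed commands above (06:14:47.377 through 06:14:47.446) edits /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. As a sketch of the end state, assuming the drop-in holds only the keys touched here and that they sit in CRI-O's usual [crio.image]/[crio.runtime] sections (the section headers are an assumption, they are not shown in the log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

crictl is pointed at the same runtime by the /etc/crictl.yaml written at 06:14:47.196804 (runtime-endpoint: unix:///var/run/crio/crio.sock).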
	I1210 06:14:47.705550  383776 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:14:47.705616  383776 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:14:47.709537  383776 start.go:564] Will wait 60s for crictl version
	I1210 06:14:47.709596  383776 ssh_runner.go:195] Run: which crictl
	I1210 06:14:47.713056  383776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:14:47.741912  383776 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:14:47.741973  383776 ssh_runner.go:195] Run: crio --version
	I1210 06:14:47.772416  383776 ssh_runner.go:195] Run: crio --version
	I1210 06:14:47.801886  383776 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:14:47.802905  383776 cli_runner.go:164] Run: docker network inspect no-preload-468539 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:14:47.820157  383776 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 06:14:47.823949  383776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:14:47.834274  383776 kubeadm.go:884] updating cluster {Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:14:47.834469  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:47.992158  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:48.135001  383776 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:48.268202  383776 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:14:48.268255  383776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:14:48.302310  383776 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:14:48.302333  383776 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:14:48.302344  383776 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 crio true true} ...
	I1210 06:14:48.302474  383776 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-468539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:14:48.302562  383776 ssh_runner.go:195] Run: crio config
	I1210 06:14:48.350151  383776 cni.go:84] Creating CNI manager for ""
	I1210 06:14:48.350172  383776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:48.350186  383776 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:14:48.350208  383776 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-468539 NodeName:no-preload-468539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:14:48.350350  383776 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-468539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:14:48.350413  383776 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:14:48.358354  383776 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:14:48.358425  383776 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:14:48.366792  383776 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:14:48.379206  383776 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:14:48.391357  383776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1210 06:14:48.403949  383776 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:14:48.407573  383776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
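The one-liner above updates /etc/hosts idempotently: it filters out any existing control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back over /etc/hosts. The expected resulting entry is a single tab-separated line (sketch):

	192.168.94.2	control-plane.minikube.internal

The same pattern was used a few lines earlier for host.minikube.internal (192.168.94.1).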
	I1210 06:14:48.417141  383776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:48.497946  383776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:48.521543  383776 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539 for IP: 192.168.94.2
	I1210 06:14:48.521565  383776 certs.go:195] generating shared ca certs ...
	I1210 06:14:48.521584  383776 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:48.521743  383776 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:14:48.521806  383776 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:14:48.521820  383776 certs.go:257] generating profile certs ...
	I1210 06:14:48.521922  383776 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/client.key
	I1210 06:14:48.521992  383776 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/apiserver.key.e78c3671
	I1210 06:14:48.522040  383776 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/proxy-client.key
	I1210 06:14:48.522197  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:14:48.522239  383776 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:14:48.522252  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:14:48.522286  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:14:48.522319  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:14:48.522354  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:14:48.522410  383776 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:14:48.523275  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:14:48.542678  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:14:48.561364  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:14:48.581292  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:14:48.605253  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:14:48.627817  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:14:48.645324  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:14:48.663571  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/no-preload-468539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:14:48.681633  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:14:48.700299  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:14:48.719401  383776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:14:48.737822  383776 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:14:48.752621  383776 ssh_runner.go:195] Run: openssl version
	I1210 06:14:48.761576  383776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.770938  383776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:14:48.780634  383776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.785339  383776 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.785385  383776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:14:48.842949  383776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:14:48.853278  383776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.862863  383776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:14:48.876238  383776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.882101  383776 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.882313  383776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:14:48.944699  383776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:14:48.956037  383776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:14:48.965887  383776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:14:48.975196  383776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:14:48.979943  383776 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:14:48.979995  383776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:14:49.033526  383776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
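The three openssl/test pairs above follow the standard OpenSSL trust-directory convention: "openssl x509 -hash -noout" prints the certificate's subject hash, and the CA counts as installed once /etc/ssl/certs/<hash>.0 resolves to it. A manual equivalent for the minikubeCA case, using the paths and the b5213941 hash taken from the log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	sudo test -L "/etc/ssl/certs/${hash}.0" && echo "minikubeCA is discoverable by OpenSSL"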
	I1210 06:14:49.044021  383776 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:14:49.049472  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:14:49.109982  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:14:49.172502  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:14:49.245491  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:14:49.316642  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:14:49.380496  383776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:14:49.440091  383776 kubeadm.go:401] StartCluster: {Name:no-preload-468539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-468539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:49.440295  383776 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:14:49.440416  383776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:14:49.474117  383776 cri.go:89] found id: "986b3c2f0cda833eb6ebd4b6f5458a0e267bb8b83d3a119c68be6281e7585474"
	I1210 06:14:49.474147  383776 cri.go:89] found id: "87175e8498ad3223a893f9948444ea564e4f493dc0ce2a68eed9c2e36f356f00"
	I1210 06:14:49.474154  383776 cri.go:89] found id: "ec6692c835d1d4b482f3d9e22fd61d623beb739ec5760b5e0b356cba3798f5ef"
	I1210 06:14:49.474159  383776 cri.go:89] found id: "c134cc07c343ee0eec86fdc21ea9f07ab5dc05344377ced872b852a9c514a84c"
	I1210 06:14:49.474164  383776 cri.go:89] found id: ""
	I1210 06:14:49.474218  383776 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:14:49.488161  383776 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:14:49Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:14:49.488242  383776 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:14:49.496286  383776 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:14:49.496305  383776 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:14:49.496350  383776 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:14:49.504825  383776 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:14:49.506003  383776 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-468539" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:49.507025  383776 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-468539" cluster setting kubeconfig missing "no-preload-468539" context setting]
	I1210 06:14:49.508365  383776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:49.510698  383776 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:14:49.519575  383776 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1210 06:14:49.519606  383776 kubeadm.go:602] duration metric: took 23.295226ms to restartPrimaryControlPlane
	I1210 06:14:49.519617  383776 kubeadm.go:403] duration metric: took 79.549016ms to StartCluster
	I1210 06:14:49.519641  383776 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:49.519700  383776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:49.521475  383776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:14:49.521730  383776 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:14:49.521956  383776 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:14:49.521948  383776 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:14:49.522045  383776 addons.go:70] Setting storage-provisioner=true in profile "no-preload-468539"
	I1210 06:14:49.522062  383776 addons.go:239] Setting addon storage-provisioner=true in "no-preload-468539"
	W1210 06:14:49.522070  383776 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:14:49.522203  383776 addons.go:70] Setting dashboard=true in profile "no-preload-468539"
	I1210 06:14:49.522216  383776 addons.go:239] Setting addon dashboard=true in "no-preload-468539"
	W1210 06:14:49.522223  383776 addons.go:248] addon dashboard should already be in state true
	I1210 06:14:49.522251  383776 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:14:49.522271  383776 addons.go:70] Setting default-storageclass=true in profile "no-preload-468539"
	I1210 06:14:49.522290  383776 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-468539"
	I1210 06:14:49.522625  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.522705  383776 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:14:49.522760  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.523212  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.523945  383776 out.go:179] * Verifying Kubernetes components...
	I1210 06:14:49.525314  383776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:14:49.553441  383776 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:14:49.554480  383776 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:49.554505  383776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:14:49.554698  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:49.556681  383776 addons.go:239] Setting addon default-storageclass=true in "no-preload-468539"
	W1210 06:14:49.556739  383776 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:14:49.556802  383776 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:14:49.557381  383776 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:14:49.560435  383776 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:14:49.561654  383776 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:14:49.562658  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:14:49.562678  383776 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:14:49.562795  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:49.593413  383776 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:49.593493  383776 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:14:49.593556  383776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:14:49.593682  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:49.605384  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:49.626050  383776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:14:49.706934  383776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:14:49.725468  383776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:14:49.727051  383776 node_ready.go:35] waiting up to 6m0s for node "no-preload-468539" to be "Ready" ...
	I1210 06:14:49.730629  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:14:49.730651  383776 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:14:49.752355  383776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:14:49.753550  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:14:49.753572  383776 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:14:49.771260  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:14:49.771286  383776 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:14:49.795111  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:14:49.795134  383776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:14:49.815122  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:14:49.815146  383776 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:14:49.833608  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:14:49.833626  383776 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:14:49.850913  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:14:49.850937  383776 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:14:49.867312  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:14:49.867338  383776 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:14:49.887613  383776 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:14:49.887685  383776 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:14:49.902788  383776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:14:50.864967  383776 node_ready.go:49] node "no-preload-468539" is "Ready"
	I1210 06:14:50.865007  383776 node_ready.go:38] duration metric: took 1.137900152s for node "no-preload-468539" to be "Ready" ...
	I1210 06:14:50.865025  383776 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:14:50.865094  383776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:14:51.467143  383776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.741581778s)
	I1210 06:14:51.467224  383776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.714832034s)
	I1210 06:14:51.467333  383776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.564510264s)
	I1210 06:14:51.467373  383776 api_server.go:72] duration metric: took 1.945611452s to wait for apiserver process to appear ...
	I1210 06:14:51.467387  383776 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:14:51.467406  383776 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1210 06:14:51.471241  383776 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-468539 addons enable metrics-server
	
	I1210 06:14:51.472270  383776 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:14:51.472292  383776 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:14:51.473546  383776 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
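The 500s above are the apiserver's own /healthz endpoint reporting that the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not completed yet, a normal transient state right after a restart; minikube keeps polling until the endpoint returns 200. An equivalent manual probe (sketch; -k is needed because the serving certificate is signed by minikubeCA rather than a system CA):

	curl -ks "https://192.168.94.2:8443/healthz?verbose"
	# during bootstrap: HTTP 500 plus the per-check [+]/[-] list shown above
	# once ready:       HTTP 200 (with ?verbose, the same list with every check ok)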
	I1210 06:14:47.277326  377144 addons.go:530] duration metric: took 558.194348ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 06:14:47.545700  377144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-125336" context rescaled to 1 replicas
	W1210 06:14:49.047091  377144 node_ready.go:57] node "default-k8s-diff-port-125336" has "Ready":"False" status (will retry)
	W1210 06:14:51.546930  377144 node_ready.go:57] node "default-k8s-diff-port-125336" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 10 06:14:20 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:20.534066543Z" level=info msg="Started container" PID=1744 containerID=7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper id=5e09033d-058d-4e9f-8315-4b2a9b9c5741 name=/runtime.v1.RuntimeService/StartContainer sandboxID=052f28d6d942525bc4843b7441d2a436489d78396e28047b7257a883300d55da
	Dec 10 06:14:21 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:21.491167342Z" level=info msg="Removing container: 8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e" id=fbf32c77-26aa-40fe-a22a-cb01e70b2416 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:14:21 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:21.501969792Z" level=info msg="Removed container 8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=fbf32c77-26aa-40fe-a22a-cb01e70b2416 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.519785864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=900be59b-4e3a-474b-905a-ae4e9d080060 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.520968363Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=49cba5d2-3ed4-4e5e-b9eb-3c921efb1f1e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.521954951Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=feaf59e1-57e1-4ffa-af8a-e01aeae6973b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.52209247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.52630982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.526478945Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8c71644baf890610956c58e5ef7646d1f05c7bafc1fc4801098fc60460bc40ed/merged/etc/passwd: no such file or directory"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.526510205Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8c71644baf890610956c58e5ef7646d1f05c7bafc1fc4801098fc60460bc40ed/merged/etc/group: no such file or directory"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.526761939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.551380034Z" level=info msg="Created container 39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232: kube-system/storage-provisioner/storage-provisioner" id=feaf59e1-57e1-4ffa-af8a-e01aeae6973b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.551844297Z" level=info msg="Starting container: 39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232" id=a5df2216-a69f-4b93-9f84-74757d726036 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:32 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:32.55351525Z" level=info msg="Started container" PID=1760 containerID=39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232 description=kube-system/storage-provisioner/storage-provisioner id=a5df2216-a69f-4b93-9f84-74757d726036 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c93072aae323b32ad98323401b19d1257340958ff27d0627b1fa1000cb6a830e
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.380825864Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a9e3c90-328d-4924-bff8-0e2e778fe853 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.381766699Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1dc8211c-9f19-45ca-89cd-7f4f98ac6276 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.382614761Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=56d1822e-8279-469f-bbc9-815b19a5fe0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.382734551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.387790052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.388246336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.415122184Z" level=info msg="Created container 63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=56d1822e-8279-469f-bbc9-815b19a5fe0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.41560045Z" level=info msg="Starting container: 63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7" id=4c30b1c5-149d-475f-9ea7-c46c6b0ee6e4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.417435472Z" level=info msg="Started container" PID=1778 containerID=63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper id=4c30b1c5-149d-475f-9ea7-c46c6b0ee6e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=052f28d6d942525bc4843b7441d2a436489d78396e28047b7257a883300d55da
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.533172971Z" level=info msg="Removing container: 7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55" id=1090581d-ec72-4be3-9398-a90adc926566 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:14:35 old-k8s-version-725426 crio[560]: time="2025-12-10T06:14:35.543017113Z" level=info msg="Removed container 7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6/dashboard-metrics-scraper" id=1090581d-ec72-4be3-9398-a90adc926566 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	63f93756ec3e5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   052f28d6d9425       dashboard-metrics-scraper-5f989dc9cf-dhsb6       kubernetes-dashboard
	39a2f71da71fe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   c93072aae323b       storage-provisioner                              kube-system
	0b17c77ccaaf0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   443646f09ecf1       kubernetes-dashboard-8694d4445c-8jvqp            kubernetes-dashboard
	3ec52d6fd6de5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   fd8b94fd03e0e       coredns-5dd5756b68-vxb6d                         kube-system
	cd3165120c2c5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   b3760044655bb       busybox                                          default
	5d02e309047c0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   5ef13afa20641       kindnet-5zsjn                                    kube-system
	9e0c7c4b5d625       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   c93072aae323b       storage-provisioner                              kube-system
	5990f9b53cfb7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   8e8663b05c653       kube-proxy-m59j8                                 kube-system
	217c2500f89f7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   cca76a9a67783       etcd-old-k8s-version-725426                      kube-system
	157dd67e0dd14       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   edd3e82f6018b       kube-apiserver-old-k8s-version-725426            kube-system
	8f5281037f2c4       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   8f1996e01ab0c       kube-controller-manager-old-k8s-version-725426   kube-system
	e4a03ac2f7438       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   d00e0885a5ce1       kube-scheduler-old-k8s-version-725426            kube-system
	
	
	==> coredns [3ec52d6fd6de5971b9c5c66dabd9b83677b9c3bb23ccec3b078f383aee8c9fbe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58017 - 9941 "HINFO IN 3691086577375494435.4746766571204173882. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06497998s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-725426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-725426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=old-k8s-version-725426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_12_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-725426
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:12:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:14:31 +0000   Wed, 10 Dec 2025 06:13:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-725426
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                b7d5f572-8473-408c-855f-67c8fb07b4fa
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-vxb6d                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-725426                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-5zsjn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-725426             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-725426    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-m59j8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-725426             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dhsb6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8jvqp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-725426 event: Registered Node old-k8s-version-725426 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-725426 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-725426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-725426 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-725426 event: Registered Node old-k8s-version-725426 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [217c2500f89f71d1324ffbf4ed5b1db6ba6968887bda00d70e62b6c6b61b2d9c] <==
	{"level":"info","ts":"2025-12-10T06:13:58.031064Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-10T06:13:58.030435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-10T06:13:58.028895Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-10T06:13:58.031741Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-10T06:13:58.032519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:13:58.032574Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:13:58.036682Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-10T06:13:58.036922Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:13:58.036981Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:13:58.037044Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:13:58.037071Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:13:59.217592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:59.217635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:59.217652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:13:59.217668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.217676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.217688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.217698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:13:59.219282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:13:59.2193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:13:59.219293Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-725426 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:13:59.219525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:13:59.219622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:13:59.220452Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:13:59.220571Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 06:14:52 up 57 min,  0 user,  load average: 5.00, 4.45, 2.85
	Linux old-k8s-version-725426 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d02e309047c074c7ea66ed67d4e89f47b261b4f75b4e00eb2cb3070da54fe1c] <==
	I1210 06:14:02.540847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:14:02.541329       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:14:02.541505       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:14:02.541531       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:14:02.541544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:14:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:14:02.814596       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:14:02.814732       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:14:02.814747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:14:02.815434       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:14:03.219316       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:14:03.219353       1 metrics.go:72] Registering metrics
	I1210 06:14:03.219425       1 controller.go:711] "Syncing nftables rules"
	I1210 06:14:12.814279       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:12.814352       1 main.go:301] handling current node
	I1210 06:14:22.814652       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:22.814690       1 main.go:301] handling current node
	I1210 06:14:32.814357       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:32.814386       1 main.go:301] handling current node
	I1210 06:14:42.816281       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:14:42.816322       1 main.go:301] handling current node
	
	
	==> kube-apiserver [157dd67e0dd14a9973e3a0ca206bd7d0544b492de0e9c6fb754a5a6046365641] <==
	I1210 06:14:00.396775       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 06:14:00.397207       1 aggregator.go:166] initial CRD sync complete...
	I1210 06:14:00.397223       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 06:14:00.397231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:14:00.397241       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:14:00.397562       1 shared_informer.go:318] Caches are synced for configmaps
	I1210 06:14:00.397621       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1210 06:14:00.397629       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1210 06:14:00.397875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1210 06:14:00.400421       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 06:14:00.400456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1210 06:14:00.408642       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:14:00.427099       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:14:00.462850       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1210 06:14:01.300485       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:14:01.368110       1 controller.go:624] quota admission added evaluator for: namespaces
	I1210 06:14:01.418022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 06:14:01.436486       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:14:01.445837       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:14:01.453490       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 06:14:01.487436       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.170.212"}
	I1210 06:14:01.498971       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.18.54"}
	I1210 06:14:12.610632       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 06:14:12.648581       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 06:14:12.805431       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8f5281037f2c44ec8cb539eef6c1a25935bd970e552a1b7795809e847a03d5ca] <==
	I1210 06:14:12.792340       1 event.go:307] "Event occurred" object="old-k8s-version-725426" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-725426 event: Registered Node old-k8s-version-725426 in Controller"
	I1210 06:14:12.792185       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1210 06:14:12.792888       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-725426"
	I1210 06:14:12.792994       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:14:12.793700       1 shared_informer.go:318] Caches are synced for daemon sets
	I1210 06:14:12.795918       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1210 06:14:12.814470       1 shared_informer.go:318] Caches are synced for node
	I1210 06:14:12.814566       1 range_allocator.go:174] "Sending events to api server"
	I1210 06:14:12.814619       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1210 06:14:12.814628       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1210 06:14:12.814637       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1210 06:14:12.843822       1 shared_informer.go:318] Caches are synced for TTL
	I1210 06:14:12.848028       1 shared_informer.go:318] Caches are synced for persistent volume
	I1210 06:14:13.180493       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:14:13.222967       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:14:13.223013       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 06:14:16.494195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.307039ms"
	I1210 06:14:16.494333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.479µs"
	I1210 06:14:20.500273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.293µs"
	I1210 06:14:21.501272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="126.301µs"
	I1210 06:14:22.507489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="131.551µs"
	I1210 06:14:33.659544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.172331ms"
	I1210 06:14:33.659673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.828µs"
	I1210 06:14:35.543762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.71µs"
	I1210 06:14:42.943995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.353µs"
	
	
	==> kube-proxy [5990f9b53cfb7195fb05941363f517e271964705fb893f41bcded21a1b4fc06e] <==
	I1210 06:14:02.374362       1 server_others.go:69] "Using iptables proxy"
	I1210 06:14:02.387596       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1210 06:14:02.414068       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:14:02.420850       1 server_others.go:152] "Using iptables Proxier"
	I1210 06:14:02.420898       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 06:14:02.420917       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 06:14:02.420958       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 06:14:02.421272       1 server.go:846] "Version info" version="v1.28.0"
	I1210 06:14:02.421345       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:02.422180       1 config.go:315] "Starting node config controller"
	I1210 06:14:02.422255       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 06:14:02.422754       1 config.go:188] "Starting service config controller"
	I1210 06:14:02.422781       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 06:14:02.422803       1 config.go:97] "Starting endpoint slice config controller"
	I1210 06:14:02.422807       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 06:14:02.522692       1 shared_informer.go:318] Caches are synced for node config
	I1210 06:14:02.523772       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 06:14:02.523792       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e4a03ac2f7438f6d74706fce0fe8f58a58512a1dac9e6fcff2e15b6523469282] <==
	I1210 06:13:58.587152       1 serving.go:348] Generated self-signed cert in-memory
	I1210 06:14:00.403215       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1210 06:14:00.403246       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:00.410074       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1210 06:14:00.410289       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:14:00.410390       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 06:14:00.410234       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:14:00.411107       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1210 06:14:00.410420       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1210 06:14:00.412661       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1210 06:14:00.412921       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1210 06:14:00.511937       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1210 06:14:00.511940       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 06:14:00.511948       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.632482     717 topology_manager.go:215] "Topology Admit Handler" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-dhsb6"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803618     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6be06f9-987c-423e-8476-bd6ee21c0520-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8jvqp\" (UID: \"d6be06f9-987c-423e-8476-bd6ee21c0520\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8jvqp"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803689     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0db0eaef-daad-4d60-ad42-c7be8937f192-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-dhsb6\" (UID: \"0db0eaef-daad-4d60-ad42-c7be8937f192\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803846     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phddr\" (UniqueName: \"kubernetes.io/projected/d6be06f9-987c-423e-8476-bd6ee21c0520-kube-api-access-phddr\") pod \"kubernetes-dashboard-8694d4445c-8jvqp\" (UID: \"d6be06f9-987c-423e-8476-bd6ee21c0520\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8jvqp"
	Dec 10 06:14:12 old-k8s-version-725426 kubelet[717]: I1210 06:14:12.803907     717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvcr2\" (UniqueName: \"kubernetes.io/projected/0db0eaef-daad-4d60-ad42-c7be8937f192-kube-api-access-zvcr2\") pod \"dashboard-metrics-scraper-5f989dc9cf-dhsb6\" (UID: \"0db0eaef-daad-4d60-ad42-c7be8937f192\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6"
	Dec 10 06:14:16 old-k8s-version-725426 kubelet[717]: I1210 06:14:16.485783     717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8jvqp" podStartSLOduration=1.069236846 podCreationTimestamp="2025-12-10 06:14:12 +0000 UTC" firstStartedPulling="2025-12-10 06:14:12.953514723 +0000 UTC m=+15.738490139" lastFinishedPulling="2025-12-10 06:14:16.36998249 +0000 UTC m=+19.154957915" observedRunningTime="2025-12-10 06:14:16.485360445 +0000 UTC m=+19.270335883" watchObservedRunningTime="2025-12-10 06:14:16.485704622 +0000 UTC m=+19.270680060"
	Dec 10 06:14:20 old-k8s-version-725426 kubelet[717]: I1210 06:14:20.485405     717 scope.go:117] "RemoveContainer" containerID="8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e"
	Dec 10 06:14:21 old-k8s-version-725426 kubelet[717]: I1210 06:14:21.489826     717 scope.go:117] "RemoveContainer" containerID="8ef4a9d5ced39d9be850254a90c2b7de6b85411f25a296b8fcc018cbd9858e6e"
	Dec 10 06:14:21 old-k8s-version-725426 kubelet[717]: I1210 06:14:21.490002     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:21 old-k8s-version-725426 kubelet[717]: E1210 06:14:21.490422     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:22 old-k8s-version-725426 kubelet[717]: I1210 06:14:22.494806     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:22 old-k8s-version-725426 kubelet[717]: E1210 06:14:22.495230     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:23 old-k8s-version-725426 kubelet[717]: I1210 06:14:23.496976     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:23 old-k8s-version-725426 kubelet[717]: E1210 06:14:23.497359     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:32 old-k8s-version-725426 kubelet[717]: I1210 06:14:32.519270     717 scope.go:117] "RemoveContainer" containerID="9e0c7c4b5d6256e465920477a0bbc62ffede25d7f1093bad071e1332673719ed"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: I1210 06:14:35.380279     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: I1210 06:14:35.531894     717 scope.go:117] "RemoveContainer" containerID="7d4237a6b27222f68fcb12e7515cc737804f01bfde3e9d158a0276390ee05e55"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: I1210 06:14:35.532168     717 scope.go:117] "RemoveContainer" containerID="63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7"
	Dec 10 06:14:35 old-k8s-version-725426 kubelet[717]: E1210 06:14:35.532566     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:42 old-k8s-version-725426 kubelet[717]: I1210 06:14:42.934873     717 scope.go:117] "RemoveContainer" containerID="63f93756ec3e560abf24b3a96a103ac572af56959a7e1630a2df5c20bfe381d7"
	Dec 10 06:14:42 old-k8s-version-725426 kubelet[717]: E1210 06:14:42.935170     717 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dhsb6_kubernetes-dashboard(0db0eaef-daad-4d60-ad42-c7be8937f192)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dhsb6" podUID="0db0eaef-daad-4d60-ad42-c7be8937f192"
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:14:47 old-k8s-version-725426 systemd[1]: kubelet.service: Consumed 1.485s CPU time.
	
	
	==> kubernetes-dashboard [0b17c77ccaaf0facf4210e4593ca7afa37dcce6d104183e98e0ff909ab0e54f1] <==
	2025/12/10 06:14:16 Using namespace: kubernetes-dashboard
	2025/12/10 06:14:16 Using in-cluster config to connect to apiserver
	2025/12/10 06:14:16 Using secret token for csrf signing
	2025/12/10 06:14:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:14:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:14:16 Successful initial request to the apiserver, version: v1.28.0
	2025/12/10 06:14:16 Generating JWE encryption key
	2025/12/10 06:14:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:14:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:14:16 Initializing JWE encryption key from synchronized object
	2025/12/10 06:14:16 Creating in-cluster Sidecar client
	2025/12/10 06:14:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:14:16 Serving insecurely on HTTP port: 9090
	2025/12/10 06:14:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:14:16 Starting overwatch
	
	
	==> storage-provisioner [39a2f71da71fe7c1de3850b3a9c51ed384745691f28a2032e402aa21b568b232] <==
	I1210 06:14:32.565934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:14:32.572608       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:14:32.572642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 06:14:49.973224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:14:49.973404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-725426_c777854a-ab64-4118-a65e-f1c661b70492!
	I1210 06:14:49.973370       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2524e9c-7625-4e55-9d2f-d2c7b14c23d5", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-725426_c777854a-ab64-4118-a65e-f1c661b70492 became leader
	I1210 06:14:50.074397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-725426_c777854a-ab64-4118-a65e-f1c661b70492!
	
	
	==> storage-provisioner [9e0c7c4b5d6256e465920477a0bbc62ffede25d7f1093bad071e1332673719ed] <==
	I1210 06:14:02.343352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:14:32.351433       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725426 -n old-k8s-version-725426
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-725426 -n old-k8s-version-725426: exit status 2 (321.666738ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-725426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.08s)
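For reference, minikube status encodes component health in its exit code, which is why the harness treats the exit status 2 above as possibly OK rather than a command failure. A minimal sketch of probing the remaining per-component fields by hand, reusing the binary and profile name from the logs above (the --output=json flag is an assumption about this build):

	# single-field probe, same pattern the harness used for {{.APIServer}} and {{.Host}}
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-725426 -n old-k8s-version-725426
	# full machine-readable status, assuming --output=json is available in this build
	out/minikube-linux-amd64 status --output=json -p old-k8s-version-725426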

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (292.836838ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-125336 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-125336 describe deploy/metrics-server -n kube-system: exit status 1 (68.998995ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-125336 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
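The exit status 11 above bottoms out in the pause check: sudo runc list -f json fails because /run/runc does not exist on the node. A minimal sketch of re-running that probe and inspecting which OCI runtime crio is actually configured with (assumes the profile is still running; the grep pattern is only illustrative):

	# re-run the exact pause check from the stderr above, inside the node
	out/minikube-linux-amd64 -p default-k8s-diff-port-125336 ssh -- sudo runc list -f json
	# show crio's configured runtime and root; if crio drives a different runtime (e.g. crun), /run/runc may be absent
	out/minikube-linux-amd64 -p default-k8s-diff-port-125336 ssh -- sudo crio config | grep -i -A3 runtime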
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-125336
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-125336:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22",
	        "Created": "2025-12-10T06:14:12.606946513Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 377700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:14:12.650542595Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/hosts",
	        "LogPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22-json.log",
	        "Name": "/default-k8s-diff-port-125336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-125336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-125336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22",
	                "LowerDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-125336",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-125336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-125336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-125336",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-125336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "008007e3d9209bd1765e635670ea794a9f06467096b4f44c17bb9f3889222e10",
	            "SandboxKey": "/var/run/docker/netns/008007e3d920",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-125336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6dcc364cf8d2e6fffb8ab01503e1fba4cf2ae27c41034eeff5b62eed98af1ff5",
	                    "EndpointID": "298ccbf989cfcaf39d188dc50a2abfc728eed42a12c9f75f804886fc96b860f4",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "16:34:07:77:f9:13",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-125336",
	                        "2b7aea94b356"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
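The inspect dump above can be narrowed to just the container state and host port mappings with docker's Go-template formatting; a minimal sketch (field paths taken from the JSON above):

	docker inspect default-k8s-diff-port-125336 --format '{{.State.Status}} {{json .NetworkSettings.Ports}}'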
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-125336 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-125336 logs -n 25: (1.114157167s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ ssh     │ -p bridge-094798 sudo systemctl cat containerd --no-pager                                                                                                                                                                                          │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                   │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo cat /etc/containerd/config.toml                                                                                                                                                                                              │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo containerd config dump                                                                                                                                                                                                       │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl cat crio --no-pager                                                                                                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo crio config                                                                                                                                                                                                                  │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p bridge-094798                                                                                                                                                                                                                                   │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                                                                                                    │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:14:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:14:57.244539  389191 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:57.244673  389191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:57.244688  389191 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:57.244695  389191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:57.245001  389191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:14:57.245593  389191 out.go:368] Setting JSON to false
	I1210 06:14:57.247197  389191 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3441,"bootTime":1765343856,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:14:57.247274  389191 start.go:143] virtualization: kvm guest
	I1210 06:14:57.252874  389191 out.go:179] * [embed-certs-028500] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:14:57.255011  389191 notify.go:221] Checking for updates...
	I1210 06:14:57.255717  389191 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:14:57.257330  389191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:14:57.258824  389191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:57.260271  389191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:14:57.266331  389191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:14:57.268173  389191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:14:57.269975  389191 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:57.270777  389191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:14:57.300545  389191 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:14:57.300644  389191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:57.369334  389191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 06:14:57.356993167 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:57.369491  389191 docker.go:319] overlay module found
	I1210 06:14:57.373368  389191 out.go:179] * Using the docker driver based on existing profile
	I1210 06:14:57.374676  389191 start.go:309] selected driver: docker
	I1210 06:14:57.374705  389191 start.go:927] validating driver "docker" against &{Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:57.374820  389191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:14:57.375704  389191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:57.447850  389191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 06:14:57.435495137 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:57.448243  389191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:14:57.448287  389191 cni.go:84] Creating CNI manager for ""
	I1210 06:14:57.448362  389191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:57.448422  389191 start.go:353] cluster config:
	{Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:57.488130  389191 out.go:179] * Starting "embed-certs-028500" primary control-plane node in "embed-certs-028500" cluster
	I1210 06:14:57.490068  389191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:14:57.491685  389191 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:14:57.493456  389191 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:14:57.493556  389191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:14:57.519744  389191 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:14:57.522000  389191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:14:57.522027  389191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:14:57.607817  389191 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:14:57.608003  389191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/config.json ...
	I1210 06:14:57.608405  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:57.608738  389191 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:14:57.608848  389191 start.go:360] acquireMachinesLock for embed-certs-028500: {Name:mk1cdfd1ea9c285bf25b2cff0c617487c1b93472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:57.609252  389191 start.go:364] duration metric: took 370.774µs to acquireMachinesLock for "embed-certs-028500"
	I1210 06:14:57.609298  389191 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:14:57.609306  389191 fix.go:54] fixHost starting: 
	I1210 06:14:57.609617  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:57.635212  389191 fix.go:112] recreateIfNeeded on embed-certs-028500: state=Stopped err=<nil>
	W1210 06:14:57.635250  389191 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 06:14:56.513637  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:14:59.014488  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:14:58.050263  377144 node_ready.go:57] node "default-k8s-diff-port-125336" has "Ready":"False" status (will retry)
	W1210 06:15:00.546648  377144 node_ready.go:57] node "default-k8s-diff-port-125336" has "Ready":"False" status (will retry)
	I1210 06:15:01.552989  377144 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:01.553023  377144 node_ready.go:38] duration metric: took 14.509783894s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:01.553042  377144 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:01.553114  377144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:01.570326  377144 api_server.go:72] duration metric: took 14.851282275s to wait for apiserver process to appear ...
	I1210 06:15:01.570350  377144 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:01.570373  377144 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:01.576618  377144 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1210 06:15:01.577871  377144 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:01.577893  377144 api_server.go:131] duration metric: took 7.536897ms to wait for apiserver health ...
	I1210 06:15:01.577912  377144 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:01.581618  377144 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:01.581652  377144 system_pods.go:61] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:01.581664  377144 system_pods.go:61] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:01.581672  377144 system_pods.go:61] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:01.581677  377144 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:01.581683  377144 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:01.581688  377144 system_pods.go:61] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:01.581693  377144 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:01.581699  377144 system_pods.go:61] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:01.581708  377144 system_pods.go:74] duration metric: took 3.787481ms to wait for pod list to return data ...
	I1210 06:15:01.581717  377144 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:01.584444  377144 default_sa.go:45] found service account: "default"
	I1210 06:15:01.584463  377144 default_sa.go:55] duration metric: took 2.740448ms for default service account to be created ...
	I1210 06:15:01.584473  377144 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:01.587134  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:01.587156  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:01.587168  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:01.587176  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:01.587182  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:01.587188  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:01.587200  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:01.587206  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:01.587226  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:01.587250  377144 retry.go:31] will retry after 220.063224ms: missing components: kube-dns
	I1210 06:14:56.986342  388833 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:14:56.986752  388833 start.go:159] libmachine.API.Create for "newest-cni-218688" (driver="docker")
	I1210 06:14:56.986797  388833 client.go:173] LocalClient.Create starting
	I1210 06:14:56.986894  388833 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:14:56.986932  388833 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:56.986954  388833 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:56.987031  388833 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:14:56.987089  388833 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:56.987109  388833 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:56.987565  388833 cli_runner.go:164] Run: docker network inspect newest-cni-218688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:14:57.010491  388833 cli_runner.go:211] docker network inspect newest-cni-218688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:14:57.010694  388833 network_create.go:284] running [docker network inspect newest-cni-218688] to gather additional debugging logs...
	I1210 06:14:57.010720  388833 cli_runner.go:164] Run: docker network inspect newest-cni-218688
	W1210 06:14:57.031777  388833 cli_runner.go:211] docker network inspect newest-cni-218688 returned with exit code 1
	I1210 06:14:57.031800  388833 network_create.go:287] error running [docker network inspect newest-cni-218688]: docker network inspect newest-cni-218688: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-218688 not found
	I1210 06:14:57.031809  388833 network_create.go:289] output of [docker network inspect newest-cni-218688]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-218688 not found
	
	** /stderr **
	I1210 06:14:57.031880  388833 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:14:57.053367  388833 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:14:57.054400  388833 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:14:57.055454  388833 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:14:57.056624  388833 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d51550}
	I1210 06:14:57.056657  388833 network_create.go:124] attempt to create docker network newest-cni-218688 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:14:57.056718  388833 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-218688 newest-cni-218688
	I1210 06:14:57.119628  388833 network_create.go:108] docker network newest-cni-218688 192.168.76.0/24 created
	I1210 06:14:57.119662  388833 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-218688" container
	I1210 06:14:57.119732  388833 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:14:57.143498  388833 cli_runner.go:164] Run: docker volume create newest-cni-218688 --label name.minikube.sigs.k8s.io=newest-cni-218688 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:14:57.165716  388833 oci.go:103] Successfully created a docker volume newest-cni-218688
	I1210 06:14:57.165800  388833 cli_runner.go:164] Run: docker run --rm --name newest-cni-218688-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-218688 --entrypoint /usr/bin/test -v newest-cni-218688:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:14:57.872793  388833 oci.go:107] Successfully prepared a docker volume newest-cni-218688
	I1210 06:14:57.872864  388833 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:14:57.872875  388833 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:14:57.872938  388833 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-218688:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:14:57.636612  389191 out.go:252] * Restarting existing docker container for "embed-certs-028500" ...
	I1210 06:14:57.636691  389191 cli_runner.go:164] Run: docker start embed-certs-028500
	I1210 06:14:57.780632  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:57.947579  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:57.975221  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:58.000753  389191 kic.go:430] container "embed-certs-028500" state is running.
	I1210 06:14:58.001215  389191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-028500
	I1210 06:14:58.025128  389191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/config.json ...
	I1210 06:14:58.025370  389191 machine.go:94] provisionDockerMachine start ...
	I1210 06:14:58.025441  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:14:58.049145  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:58.049513  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:14:58.049528  389191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:14:58.050364  389191 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39310->127.0.0.1:33123: read: connection reset by peer
	I1210 06:14:58.112870  389191 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.112983  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:14:58.113000  389191 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 135.292µs
	I1210 06:14:58.113016  389191 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:14:58.113037  389191 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113031  389191 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113118  389191 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113158  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:14:58.113167  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:14:58.113176  389191 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 59.743µs
	I1210 06:14:58.113167  389191 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 160.676µs
	I1210 06:14:58.113184  389191 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:14:58.113186  389191 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:14:58.113202  389191 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113207  389191 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.112867  389191 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113255  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:14:58.113263  389191 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 417.914µs
	I1210 06:14:58.113278  389191 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:14:58.113271  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:14:58.113288  389191 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 88.465µs
	I1210 06:14:58.113295  389191 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:14:58.113105  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:14:58.113285  389191 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113305  389191 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 270.517µs
	I1210 06:14:58.113312  389191 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:14:58.113330  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:14:58.113337  389191 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 55.111µs
	I1210 06:14:58.113340  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:14:58.113347  389191 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:14:58.113350  389191 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 146.007µs
	I1210 06:14:58.113357  389191 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:14:58.113369  389191 cache.go:87] Successfully saved all images to host disk.
	I1210 06:15:01.188191  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-028500
	
	I1210 06:15:01.188219  389191 ubuntu.go:182] provisioning hostname "embed-certs-028500"
	I1210 06:15:01.188270  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:01.207561  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:01.207777  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:15:01.207789  389191 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-028500 && echo "embed-certs-028500" | sudo tee /etc/hostname
	I1210 06:15:01.377128  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-028500
	
	I1210 06:15:01.377211  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:01.398849  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:01.399108  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:15:01.399132  389191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-028500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-028500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-028500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:01.535984  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:01.536018  389191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:01.536060  389191 ubuntu.go:190] setting up certificates
	I1210 06:15:01.536106  389191 provision.go:84] configureAuth start
	I1210 06:15:01.536172  389191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-028500
	I1210 06:15:01.561659  389191 provision.go:143] copyHostCerts
	I1210 06:15:01.561742  389191 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:01.561762  389191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:01.561834  389191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:01.561968  389191 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:01.561982  389191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:01.562022  389191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:01.562514  389191 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:01.562537  389191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:01.562588  389191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:01.562716  389191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.embed-certs-028500 san=[127.0.0.1 192.168.85.2 embed-certs-028500 localhost minikube]
	I1210 06:15:02.084445  389191 provision.go:177] copyRemoteCerts
	I1210 06:15:02.084526  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:02.084586  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:02.107807  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:02.212977  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:02.236198  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:15:02.258387  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:15:02.281338  389191 provision.go:87] duration metric: took 745.196481ms to configureAuth
	I1210 06:15:02.281368  389191 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:02.281583  389191 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:02.281692  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:02.306737  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.306957  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:15:02.306969  389191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:02.915340  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:02.915367  389191 machine.go:97] duration metric: took 4.889981384s to provisionDockerMachine
	I1210 06:15:02.915382  389191 start.go:293] postStartSetup for "embed-certs-028500" (driver="docker")
	I1210 06:15:02.915396  389191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:02.915456  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:02.915508  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:02.937476  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.043238  389191 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:03.047553  389191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:03.047582  389191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:03.047595  389191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:03.047664  389191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:03.047768  389191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:03.047894  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:03.055892  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:03.077201  389191 start.go:296] duration metric: took 161.803141ms for postStartSetup
	I1210 06:15:03.077285  389191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:03.077339  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:03.097852  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.194550  389191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:03.199729  389191 fix.go:56] duration metric: took 5.590414431s for fixHost
	I1210 06:15:03.199755  389191 start.go:83] releasing machines lock for "embed-certs-028500", held for 5.590466192s
	I1210 06:15:03.199824  389191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-028500
	I1210 06:15:03.217598  389191 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:03.217650  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:03.217691  389191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:03.217775  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:03.235590  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.236696  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.326722  389191 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:03.383536  389191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:03.417425  389191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:03.421757  389191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:03.421822  389191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:03.430311  389191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:03.430331  389191 start.go:496] detecting cgroup driver to use...
	I1210 06:15:03.430361  389191 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:03.430406  389191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:03.444196  389191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:03.455486  389191 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:03.455524  389191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:03.468870  389191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:03.480337  389191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:03.561138  389191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:03.644816  389191 docker.go:234] disabling docker service ...
	I1210 06:15:03.644891  389191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:03.658552  389191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:03.670798  389191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:03.759208  389191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:03.844591  389191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:03.857559  389191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:03.871674  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.005035  389191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:04.005112  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.015471  389191 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:04.015537  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.024208  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.032265  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.040744  389191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:04.049019  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.058203  389191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.066434  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.074629  389191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:04.081503  389191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:04.088535  389191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:04.175868  389191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:04.318209  389191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:04.318273  389191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:04.322046  389191 start.go:564] Will wait 60s for crictl version
	I1210 06:15:04.322135  389191 ssh_runner.go:195] Run: which crictl
	I1210 06:15:04.325555  389191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:04.350000  389191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:04.350072  389191 ssh_runner.go:195] Run: crio --version
	I1210 06:15:04.384274  389191 ssh_runner.go:195] Run: crio --version
	I1210 06:15:04.413587  389191 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
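The run of sed edits between 06:15:04.005 and 06:15:04.066 all rewrite the same CRI-O drop-in before the systemctl restart crio that follows. Reconstructed from those commands (key order inside the file and all untouched keys come from the kicbase image, so this is a sketch rather than captured output):
	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",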
	I1210 06:15:01.813507  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:01.813545  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:01.813554  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:01.813562  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:01.813569  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:01.813575  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:01.813580  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:01.813586  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:01.813593  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:01.813610  377144 retry.go:31] will retry after 267.505742ms: missing components: kube-dns
	I1210 06:15:02.087578  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:02.087615  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:02.087622  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:02.087630  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:02.087636  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:02.087641  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:02.087647  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:02.087652  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:02.087659  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:02.087681  377144 retry.go:31] will retry after 478.628156ms: missing components: kube-dns
	I1210 06:15:02.573126  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:02.573163  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:02.573171  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:02.573180  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:02.573186  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:02.573192  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:02.573198  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:02.573203  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:02.573211  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:02.573229  377144 retry.go:31] will retry after 580.697416ms: missing components: kube-dns
	I1210 06:15:03.157505  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:03.157531  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running
	I1210 06:15:03.157543  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:03.157547  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:03.157551  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:03.157554  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:03.157557  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:03.157562  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:03.157565  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running
	I1210 06:15:03.157572  377144 system_pods.go:126] duration metric: took 1.573093393s to wait for k8s-apps to be running ...
	I1210 06:15:03.157583  377144 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:03.157617  377144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:03.170633  377144 system_svc.go:56] duration metric: took 13.042071ms WaitForService to wait for kubelet
	I1210 06:15:03.170659  377144 kubeadm.go:587] duration metric: took 16.451621166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:03.170679  377144 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:03.173392  377144 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:03.173416  377144 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:03.173437  377144 node_conditions.go:105] duration metric: took 2.752307ms to run NodePressure ...
	I1210 06:15:03.173453  377144 start.go:242] waiting for startup goroutines ...
	I1210 06:15:03.173467  377144 start.go:247] waiting for cluster config update ...
	I1210 06:15:03.173484  377144 start.go:256] writing updated cluster config ...
	I1210 06:15:03.173708  377144 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:03.177585  377144 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:03.180811  377144 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.184797  377144 pod_ready.go:94] pod "coredns-66bc5c9577-gkk6m" is "Ready"
	I1210 06:15:03.184817  377144 pod_ready.go:86] duration metric: took 3.988409ms for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.186688  377144 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.190479  377144 pod_ready.go:94] pod "etcd-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:03.190499  377144 pod_ready.go:86] duration metric: took 3.796111ms for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.192350  377144 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.196047  377144 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:03.196066  377144 pod_ready.go:86] duration metric: took 3.6949ms for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.197918  377144 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.581747  377144 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:03.581771  377144 pod_ready.go:86] duration metric: took 383.835581ms for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.781884  377144 pod_ready.go:83] waiting for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.182572  377144 pod_ready.go:94] pod "kube-proxy-mw5sp" is "Ready"
	I1210 06:15:04.182595  377144 pod_ready.go:86] duration metric: took 400.6856ms for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.382339  377144 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.781400  377144 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:04.781429  377144 pod_ready.go:86] duration metric: took 399.064273ms for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.781443  377144 pod_ready.go:40] duration metric: took 1.603830719s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:04.824049  377144 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:15:04.826123  377144 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-125336" cluster and "default" namespace by default
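The pod_ready waits above can be reproduced by hand against the context that was just written; a rough kubectl equivalent (selectors taken from the wait list logged at 06:15:03.177585, flags otherwise illustrative):
	$ kubectl --context default-k8s-diff-port-125336 -n kube-system get pods -l k8s-app=kube-dns
	$ kubectl --context default-k8s-diff-port-125336 -n kube-system wait pod \
	    -l component=kube-apiserver --for=condition=Ready --timeout=4m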
	I1210 06:15:01.774377  388833 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-218688:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.901376892s)
	I1210 06:15:01.774418  388833 kic.go:203] duration metric: took 3.901537573s to extract preloaded images to volume ...
	W1210 06:15:01.774508  388833 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:15:01.774557  388833 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:15:01.774606  388833 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:15:01.855535  388833 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-218688 --name newest-cni-218688 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-218688 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-218688 --network newest-cni-218688 --ip 192.168.76.2 --volume newest-cni-218688:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:15:02.202973  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Running}}
	I1210 06:15:02.227253  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:02.250326  388833 cli_runner.go:164] Run: docker exec newest-cni-218688 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:15:02.306306  388833 oci.go:144] the created container "newest-cni-218688" has a running status.
	I1210 06:15:02.306350  388833 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa...
	I1210 06:15:02.429540  388833 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:15:02.461974  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:02.486892  388833 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:15:02.486911  388833 kic_runner.go:114] Args: [docker exec --privileged newest-cni-218688 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:15:02.542227  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:02.571238  388833 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:02.571403  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:02.598485  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.598828  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:02.598849  388833 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:02.744672  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-218688
	
	I1210 06:15:02.744725  388833 ubuntu.go:182] provisioning hostname "newest-cni-218688"
	I1210 06:15:02.744795  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:02.763519  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.763851  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:02.763868  388833 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-218688 && echo "newest-cni-218688" | sudo tee /etc/hostname
	I1210 06:15:02.922141  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-218688
	
	I1210 06:15:02.922238  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:02.944060  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.944382  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:02.944425  388833 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-218688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-218688/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-218688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:03.084245  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: 
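The multi-line SSH command above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, and does nothing when /etc/hosts already names the host. A simple check from the host side (expected output inferred from the script, not captured):
	$ docker exec newest-cni-218688 grep 127.0.1.1 /etc/hosts
	127.0.1.1 newest-cni-218688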
	I1210 06:15:03.084286  388833 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:03.084312  388833 ubuntu.go:190] setting up certificates
	I1210 06:15:03.084325  388833 provision.go:84] configureAuth start
	I1210 06:15:03.084408  388833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-218688
	I1210 06:15:03.104053  388833 provision.go:143] copyHostCerts
	I1210 06:15:03.104182  388833 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:03.104194  388833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:03.104263  388833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:03.104384  388833 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:03.104396  388833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:03.104438  388833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:03.104538  388833 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:03.104550  388833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:03.104594  388833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:03.104759  388833 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.newest-cni-218688 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-218688]
	I1210 06:15:03.165746  388833 provision.go:177] copyRemoteCerts
	I1210 06:15:03.165794  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:03.165834  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.185008  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.285811  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:03.304179  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:15:03.320430  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:15:03.337295  388833 provision.go:87] duration metric: took 252.946383ms to configureAuth
	I1210 06:15:03.337316  388833 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:03.337491  388833 config.go:182] Loaded profile config "newest-cni-218688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:15:03.337578  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.356119  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:03.356311  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:03.356332  388833 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:03.628161  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:03.628184  388833 machine.go:97] duration metric: took 1.056870819s to provisionDockerMachine
	I1210 06:15:03.628194  388833 client.go:176] duration metric: took 6.641388389s to LocalClient.Create
	I1210 06:15:03.628213  388833 start.go:167] duration metric: took 6.641463566s to libmachine.API.Create "newest-cni-218688"
	I1210 06:15:03.628219  388833 start.go:293] postStartSetup for "newest-cni-218688" (driver="docker")
	I1210 06:15:03.628231  388833 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:03.628294  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:03.628335  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.649310  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.755171  388833 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:03.758919  388833 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:03.758945  388833 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:03.758960  388833 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:03.759010  388833 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:03.759117  388833 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:03.759249  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:03.766797  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:03.789487  388833 start.go:296] duration metric: took 161.255283ms for postStartSetup
	I1210 06:15:03.789902  388833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-218688
	I1210 06:15:03.810321  388833 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/config.json ...
	I1210 06:15:03.810624  388833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:03.810669  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.827235  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.920691  388833 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:03.925443  388833 start.go:128] duration metric: took 6.940841686s to createHost
	I1210 06:15:03.925465  388833 start.go:83] releasing machines lock for "newest-cni-218688", held for 6.940986965s
	I1210 06:15:03.925538  388833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-218688
	I1210 06:15:03.943157  388833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:03.943226  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.943161  388833 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:03.943295  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.962106  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.962256  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:04.110257  388833 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:04.116480  388833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:04.155253  388833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:04.159715  388833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:04.159781  388833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:04.185182  388833 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:15:04.185202  388833 start.go:496] detecting cgroup driver to use...
	I1210 06:15:04.185233  388833 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:04.185285  388833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:04.204011  388833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:04.215519  388833 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:04.215578  388833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:04.232898  388833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:04.250071  388833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:04.332823  388833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:04.424819  388833 docker.go:234] disabling docker service ...
	I1210 06:15:04.424881  388833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:04.443819  388833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:04.456381  388833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:04.543676  388833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:04.624401  388833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:04.637141  388833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:04.651674  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.788900  388833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:04.788963  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.800116  388833 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:04.800180  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.808891  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.817843  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.826902  388833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:04.835259  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.847378  388833 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.863021  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.872831  388833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:04.881896  388833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:04.889969  388833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:04.983338  388833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:05.129757  388833 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:05.129815  388833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:05.134191  388833 start.go:564] Will wait 60s for crictl version
	I1210 06:15:05.134242  388833 ssh_runner.go:195] Run: which crictl
	I1210 06:15:05.138815  388833 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:05.165685  388833 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:05.165780  388833 ssh_runner.go:195] Run: crio --version
	I1210 06:15:05.201406  388833 ssh_runner.go:195] Run: crio --version
	I1210 06:15:05.236027  388833 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:15:05.237116  388833 cli_runner.go:164] Run: docker network inspect newest-cni-218688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:05.254613  388833 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:05.258586  388833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:05.270410  388833 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:15:04.414620  389191 cli_runner.go:164] Run: docker network inspect embed-certs-028500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:04.432200  389191 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:04.436064  389191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:04.446641  389191 kubeadm.go:884] updating cluster {Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:04.446840  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.588043  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.719419  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.848978  389191 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:04.849031  389191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:04.884668  389191 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:04.884691  389191 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:04.884712  389191 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1210 06:15:04.884830  389191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-028500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:04.884901  389191 ssh_runner.go:195] Run: crio config
	I1210 06:15:04.946387  389191 cni.go:84] Creating CNI manager for ""
	I1210 06:15:04.946417  389191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:04.946435  389191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:04.946467  389191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-028500 NodeName:embed-certs-028500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:04.946650  389191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-028500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:04.946731  389191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:04.954905  389191 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:04.954966  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:04.962457  389191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1210 06:15:04.975335  389191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:04.990854  389191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
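The kubeadm config dumped above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new (2214 bytes). If needed, it can be sanity-checked in place with kubeadm itself; the binary path is assumed from the /var/lib/minikube/binaries/v1.34.3 listing earlier, and the validate subcommand is available in recent kubeadm releases:
	$ sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new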
	I1210 06:15:05.006024  389191 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:05.009959  389191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:05.020134  389191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:05.100962  389191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:05.121314  389191 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500 for IP: 192.168.85.2
	I1210 06:15:05.121332  389191 certs.go:195] generating shared ca certs ...
	I1210 06:15:05.121347  389191 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:05.121474  389191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:05.121523  389191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:05.121539  389191 certs.go:257] generating profile certs ...
	I1210 06:15:05.121619  389191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/client.key
	I1210 06:15:05.121671  389191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/apiserver.key.486bf2a6
	I1210 06:15:05.121705  389191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/proxy-client.key
	I1210 06:15:05.121809  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:05.121841  389191 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:05.121850  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:05.121875  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:05.121900  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:05.121923  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:05.121963  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:05.122577  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:05.141596  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:05.160914  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:05.181308  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:05.208001  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 06:15:05.227185  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:05.245694  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:05.264158  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:05.280978  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:05.299369  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:05.320458  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:05.338793  389191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:05.351139  389191 ssh_runner.go:195] Run: openssl version
	I1210 06:15:05.357219  389191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.364534  389191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:05.371719  389191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.375174  389191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.375226  389191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.410357  389191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:05.417640  389191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.425128  389191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:05.433281  389191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.437140  389191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.437189  389191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.473390  389191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:05.480874  389191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.488264  389191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:05.495621  389191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.499112  389191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.499150  389191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.535470  389191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:15:05.542508  389191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:05.545871  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:05.584122  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:05.620442  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:05.664709  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:05.714954  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:05.771180  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:15:05.826933  389191 kubeadm.go:401] StartCluster: {Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:05.827043  389191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:05.827162  389191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:05.865208  389191 cri.go:89] found id: "8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd"
	I1210 06:15:05.865233  389191 cri.go:89] found id: "9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b"
	I1210 06:15:05.865248  389191 cri.go:89] found id: "f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803"
	I1210 06:15:05.865255  389191 cri.go:89] found id: "6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e"
	I1210 06:15:05.865259  389191 cri.go:89] found id: ""
	I1210 06:15:05.865302  389191 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:05.882734  389191 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:05Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:05.882826  389191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:05.893263  389191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:05.893280  389191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:05.893336  389191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:05.902726  389191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:05.903775  389191 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-028500" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:05.904283  389191 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-028500" cluster setting kubeconfig missing "embed-certs-028500" context setting]
	I1210 06:15:05.905140  389191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
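Note: here kubeconfig.go detects that the kubeconfig file has neither a cluster nor a context entry named embed-certs-028500 and repairs the file under a write lock. A sketch of the same existence check with client-go (illustrative only; the path and profile name are the ones from this run):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as it appears in the log above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22094-5725/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	name := "embed-certs-028500"
	if _, ok := cfg.Clusters[name]; !ok {
		fmt.Printf("kubeconfig missing %q cluster setting\n", name)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("kubeconfig missing %q context setting\n", name)
	}
}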
	I1210 06:15:05.907660  389191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:05.917738  389191 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 06:15:05.917767  389191 kubeadm.go:602] duration metric: took 24.481301ms to restartPrimaryControlPlane
	I1210 06:15:05.917776  389191 kubeadm.go:403] duration metric: took 90.8536ms to StartCluster
	I1210 06:15:05.917793  389191 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:05.917851  389191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:05.920418  389191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:05.920638  389191 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:05.920968  389191 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:05.921014  389191 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:05.921134  389191 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-028500"
	I1210 06:15:05.921154  389191 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-028500"
	W1210 06:15:05.921162  389191 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:05.921185  389191 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:05.921643  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.921987  389191 addons.go:70] Setting dashboard=true in profile "embed-certs-028500"
	I1210 06:15:05.922005  389191 addons.go:239] Setting addon dashboard=true in "embed-certs-028500"
	I1210 06:15:05.922002  389191 addons.go:70] Setting default-storageclass=true in profile "embed-certs-028500"
	W1210 06:15:05.922013  389191 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:05.922037  389191 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-028500"
	I1210 06:15:05.922040  389191 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:05.922592  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.922604  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.923751  389191 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:05.925356  389191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:05.949696  389191 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:05.951394  389191 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:05.951415  389191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:05.951476  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:05.956475  389191 addons.go:239] Setting addon default-storageclass=true in "embed-certs-028500"
	W1210 06:15:05.956721  389191 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:05.956760  389191 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:05.957471  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.958852  389191 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:15:05.960610  389191 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1210 06:15:01.523201  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:15:04.013532  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:15:06.018128  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	I1210 06:15:05.271286  388833 kubeadm.go:884] updating cluster {Name:newest-cni-218688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-218688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:05.271475  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.418692  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.554875  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.687954  388833 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:15:05.688074  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.855972  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:06.030326  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:06.187905  388833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:06.229599  388833 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:06.229624  388833 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:15:06.229674  388833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:06.260864  388833 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:06.260884  388833 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:06.260894  388833 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1210 06:15:06.261000  388833 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-218688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-218688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:06.261100  388833 ssh_runner.go:195] Run: crio config
	I1210 06:15:06.316028  388833 cni.go:84] Creating CNI manager for ""
	I1210 06:15:06.316052  388833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:06.316074  388833 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:15:06.316129  388833 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-218688 NodeName:newest-cni-218688 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:06.316337  388833 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-218688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:06.316418  388833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:15:06.326026  388833 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:06.326128  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:06.335267  388833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:15:06.348991  388833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:15:06.363704  388833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1210 06:15:06.376224  388833 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:06.379920  388833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
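Note: the bash one-liner above strips any existing control-plane.minikube.internal mapping from /etc/hosts and appends a fresh entry pointing at 192.168.76.2, staging the result in a temp file before copying it into place. A rough Go equivalent of that filter-and-append step (a sketch under the same assumptions, not minikube's code; it stops at writing the temp file rather than overwriting /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.76.2" // node IP from the log above

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the control-plane name.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	out := strings.Join(kept, "\n") + "\n"
	// Stage the new file; copying it over /etc/hosts needs root, as in the log.
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /tmp/hosts.new; copy it over /etc/hosts with sudo")
}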
	I1210 06:15:06.390442  388833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:06.472469  388833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:06.503559  388833 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688 for IP: 192.168.76.2
	I1210 06:15:06.503579  388833 certs.go:195] generating shared ca certs ...
	I1210 06:15:06.503599  388833 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.503752  388833 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:06.503814  388833 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:06.503826  388833 certs.go:257] generating profile certs ...
	I1210 06:15:06.503889  388833 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.key
	I1210 06:15:06.503905  388833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.crt with IP's: []
	I1210 06:15:06.636967  388833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.crt ...
	I1210 06:15:06.637060  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.crt: {Name:mk7be03596a45014268417f3b356393146a5f5d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.637284  388833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.key ...
	I1210 06:15:06.637303  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.key: {Name:mk4d41261b8fc725ec99540fa8b493975695bbad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.637467  388833 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc
	I1210 06:15:06.637487  388833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 06:15:05.962837  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:05.962855  389191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:05.962930  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:05.989238  389191 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:05.989307  389191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:05.989396  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:05.991151  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:06.009953  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:06.021282  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:06.089494  389191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:06.103056  389191 node_ready.go:35] waiting up to 6m0s for node "embed-certs-028500" to be "Ready" ...
	I1210 06:15:06.115500  389191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:06.131670  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:06.131694  389191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:06.140776  389191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:06.147616  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:06.147632  389191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:06.165502  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:06.165523  389191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:06.184209  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:06.184227  389191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:06.201577  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:06.201643  389191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:06.217511  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:06.217544  389191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:06.234339  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:06.234363  389191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:06.248184  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:06.248201  389191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:06.263557  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:06.263581  389191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:06.277717  389191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:06.721124  388833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc ...
	I1210 06:15:06.721199  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc: {Name:mk9639b0fc481e59a3e06f126f056005d1389ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.721403  388833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc ...
	I1210 06:15:06.721428  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc: {Name:mka4c0d0561f1e5e969c77f8c0ebb53cee7ffff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.721564  388833 certs.go:382] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt
	I1210 06:15:06.721689  388833 certs.go:386] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key
	I1210 06:15:06.721803  388833 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key
	I1210 06:15:06.721830  388833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt with IP's: []
	I1210 06:15:06.763647  388833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt ...
	I1210 06:15:06.763667  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt: {Name:mk3de019be99c3c707ea83fb17418bc0087f5d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.763788  388833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key ...
	I1210 06:15:06.763800  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key: {Name:mk17d65bdaa97fa589b561974987dee32b3e9132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.763980  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:06.764060  388833 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:06.764075  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:06.764123  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:06.764166  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:06.764202  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:06.764269  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:06.765174  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:06.788605  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:06.807276  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:06.826209  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:06.846642  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:15:06.866684  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:15:06.886307  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:06.906302  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:06.928052  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:06.950444  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:06.969613  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:06.988837  388833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:07.003179  388833 ssh_runner.go:195] Run: openssl version
	I1210 06:15:07.010508  388833 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.020349  388833 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:07.028855  388833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.033121  388833 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.033171  388833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.080981  388833 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:07.089960  388833 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:15:07.099367  388833 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.107736  388833 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:07.114947  388833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.118449  388833 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.118510  388833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.166141  388833 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:07.174532  388833 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9253.pem /etc/ssl/certs/51391683.0
	I1210 06:15:07.182247  388833 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.189250  388833 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:07.196368  388833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.199974  388833 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.200024  388833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.241262  388833 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:15:07.249112  388833 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92532.pem /etc/ssl/certs/3ec20f2e.0
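Note: the ln -fs steps above expose each installed certificate under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL uses to find trusted CAs by hashed subject name; the hash value comes from "openssl x509 -hash -noout". A small Go sketch that shells out to openssl for the hash and creates the matching symlink (illustrative only; it links straight to the PEM under /usr/share/ca-certificates, whereas the log links via /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // cert installed above

	// Same hash the log computes with: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA in this run

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic the -f in ln -fs
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err) // typically requires root
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}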
	I1210 06:15:07.256617  388833 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:07.260118  388833 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:15:07.260181  388833 kubeadm.go:401] StartCluster: {Name:newest-cni-218688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-218688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:07.260259  388833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:07.260313  388833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:07.287937  388833 cri.go:89] found id: ""
	I1210 06:15:07.287992  388833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:07.295896  388833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:15:07.303702  388833 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:15:07.303767  388833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:15:07.311364  388833 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:15:07.311384  388833 kubeadm.go:158] found existing configuration files:
	
	I1210 06:15:07.311424  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:15:07.318748  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:15:07.318798  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:15:07.325926  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:15:07.333353  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:15:07.333404  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:15:07.340470  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:15:07.349986  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:15:07.350033  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:15:07.358707  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:15:07.366333  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:15:07.366379  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:15:07.373654  388833 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:15:07.419061  388833 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:15:07.419165  388833 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:15:07.524052  388833 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:15:07.524145  388833 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:15:07.524230  388833 kubeadm.go:319] OS: Linux
	I1210 06:15:07.524308  388833 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:15:07.524392  388833 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:15:07.524490  388833 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:15:07.524567  388833 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:15:07.524644  388833 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:15:07.524706  388833 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:15:07.524803  388833 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:15:07.524877  388833 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:15:07.611468  388833 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:15:07.611646  388833 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:15:07.611773  388833 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:15:07.624931  388833 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:15:07.567619  389191 node_ready.go:49] node "embed-certs-028500" is "Ready"
	I1210 06:15:07.567649  389191 node_ready.go:38] duration metric: took 1.464534118s for node "embed-certs-028500" to be "Ready" ...
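Note: node_ready.go above polls the node object until its Ready condition reports True (here it took about 1.5s of the 6m budget). A compact client-go sketch of that style of wait (illustrative only; the kubeconfig path and node name are the ones from this run):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22094-5725/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "embed-certs-028500", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node to be Ready")
	os.Exit(1)
}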
	I1210 06:15:07.567665  389191 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:07.567719  389191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:08.106174  389191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.990640212s)
	I1210 06:15:08.106272  389191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.965474755s)
	I1210 06:15:08.106409  389191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.828647133s)
	I1210 06:15:08.106432  389191 api_server.go:72] duration metric: took 2.18576772s to wait for apiserver process to appear ...
	I1210 06:15:08.106445  389191 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:08.106465  389191 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:15:08.108237  389191 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-028500 addons enable metrics-server
	
	I1210 06:15:08.112459  389191 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:08.112484  389191 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:08.120156  389191 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:07.627254  388833 out.go:252]   - Generating certificates and keys ...
	I1210 06:15:07.627388  388833 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:15:07.627547  388833 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:15:07.654918  388833 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:15:07.685197  388833 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:15:07.726616  388833 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:15:07.741205  388833 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:15:07.937965  388833 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:15:07.938182  388833 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-218688] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:15:08.083627  388833 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:15:08.083799  388833 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-218688] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:15:08.117630  388833 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:15:08.444067  388833 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:15:08.481509  388833 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:15:08.481664  388833 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:15:08.629793  388833 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:15:08.784672  388833 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:15:08.912623  388833 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:15:08.982623  388833 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:15:09.041301  388833 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:15:09.042069  388833 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:15:09.046169  388833 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1210 06:15:08.514319  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:15:11.012522  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	I1210 06:15:09.047513  388833 out.go:252]   - Booting up control plane ...
	I1210 06:15:09.047629  388833 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:15:09.047771  388833 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:15:09.048682  388833 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:15:09.064568  388833 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:15:09.064719  388833 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:15:09.071416  388833 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:15:09.071705  388833 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:15:09.071762  388833 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:15:09.174032  388833 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:15:09.174190  388833 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:15:09.675873  388833 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.820309ms
	I1210 06:15:09.680105  388833 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:15:09.680202  388833 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1210 06:15:09.680279  388833 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:15:09.680358  388833 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:15:10.685143  388833 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004976063s
	I1210 06:15:08.121020  389191 addons.go:530] duration metric: took 2.20001005s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:08.607308  389191 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:15:08.612418  389191 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:08.612444  389191 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:09.106971  389191 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:15:09.111215  389191 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:15:09.112278  389191 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:09.112300  389191 api_server.go:131] duration metric: took 1.005849343s to wait for apiserver health ...
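Note: the healthz probes above go from 500 (the rbac/bootstrap-roles and priority-class post-start hooks still pending) to a plain "ok" about a second later; minikube simply keeps re-querying /healthz until it sees 200 or times out. A bare-bones Go polling sketch in that spirit (illustrative only; it skips TLS verification for brevity, whereas a real check should trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	url := "https://192.168.85.2:8443/healthz" // endpoint from the log above
	client := &http.Client{
		Timeout: 5 * time.Second,
		// For illustration only: a production check should verify the apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}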
	I1210 06:15:09.112309  389191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:09.115678  389191 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:09.115712  389191 system_pods.go:61] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:09.115728  389191 system_pods.go:61] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:09.115744  389191 system_pods.go:61] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:09.115750  389191 system_pods.go:61] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:09.115759  389191 system_pods.go:61] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:09.115774  389191 system_pods.go:61] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:09.115782  389191 system_pods.go:61] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:09.115787  389191 system_pods.go:61] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:09.115795  389191 system_pods.go:74] duration metric: took 3.481249ms to wait for pod list to return data ...
	I1210 06:15:09.115804  389191 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:09.118042  389191 default_sa.go:45] found service account: "default"
	I1210 06:15:09.118060  389191 default_sa.go:55] duration metric: took 2.250472ms for default service account to be created ...
	I1210 06:15:09.118068  389191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:09.120540  389191 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:09.120571  389191 system_pods.go:89] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:09.120582  389191 system_pods.go:89] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:09.120592  389191 system_pods.go:89] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:09.120607  389191 system_pods.go:89] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:09.120615  389191 system_pods.go:89] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:09.120622  389191 system_pods.go:89] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:09.120628  389191 system_pods.go:89] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:09.120638  389191 system_pods.go:89] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:09.120649  389191 system_pods.go:126] duration metric: took 2.574435ms to wait for k8s-apps to be running ...
	I1210 06:15:09.120661  389191 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:09.120706  389191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:09.132328  389191 system_svc.go:56] duration metric: took 11.665031ms WaitForService to wait for kubelet
	I1210 06:15:09.132347  389191 kubeadm.go:587] duration metric: took 3.211683874s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:09.132362  389191 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:09.134657  389191 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:09.134675  389191 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:09.134692  389191 node_conditions.go:105] duration metric: took 2.324204ms to run NodePressure ...
	I1210 06:15:09.134708  389191 start.go:242] waiting for startup goroutines ...
	I1210 06:15:09.134719  389191 start.go:247] waiting for cluster config update ...
	I1210 06:15:09.134732  389191 start.go:256] writing updated cluster config ...
	I1210 06:15:09.134972  389191 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:09.138326  389191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:09.141234  389191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:15:11.146914  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:11.859825  388833 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.179606477s
	I1210 06:15:13.682511  388833 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002397694s
	I1210 06:15:13.702667  388833 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:15:13.713410  388833 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:15:13.721957  388833 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:15:13.722239  388833 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-218688 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:15:13.730608  388833 kubeadm.go:319] [bootstrap-token] Using token: p04ebg.bb0bv44e5xs1djbe
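
	Editor's note: the healthz output at the top of this log (profile embed-certs, apiserver 192.168.85.2:8443) shows the usual startup pattern, the endpoint answers 500 while [-]poststarthook/rbac/bootstrap-roles is still pending and switches to 200 once every hook reports ok, at which point minikube moves on to waiting for kube-system pods. As a minimal, illustrative sketch of that kind of retry loop (not minikube's actual implementation; the function name, URL, interval, timeout, and the InsecureSkipVerify shortcut are assumptions for the example):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls a /healthz URL until it returns HTTP 200 or the
	// timeout elapses, mirroring the retry loop visible in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster's apiserver uses a cluster-local CA; a real
			// client would load that CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
			resp, err := client.Get(url)
			if err != nil {
				continue // apiserver not reachable yet, keep polling
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reports every poststarthook as ok
			}
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}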
	
	
	==> CRI-O <==
	Dec 10 06:15:01 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:01.899264841Z" level=info msg="Starting container: cd90c98937dcb7062ecb8882b7e3e183164c5c5610f37a97f94822c14779dc72" id=35cfdcc8-1239-4838-bc41-350991ae1a7b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:01 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:01.90167725Z" level=info msg="Started container" PID=2995 containerID=cd90c98937dcb7062ecb8882b7e3e183164c5c5610f37a97f94822c14779dc72 description=kube-system/coredns-66bc5c9577-gkk6m/coredns id=35cfdcc8-1239-4838-bc41-350991ae1a7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f220cd75ca58157a94f98e4e469afd356f512df4c95bc636d72b2b284341434
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.290405708Z" level=info msg="Running pod sandbox: default/busybox/POD" id=12326517-695d-46c9-b569-97ccde930ea9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.290500402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.296816322Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:882fecf8de66a9a48f674978729fbb0caf39a17371cbe92c4945676f27b5d044 UID:d26c68d7-e6ef-4b9c-9cb5-08387e67e53f NetNS:/var/run/netns/44df7457-1692-409e-9dfb-2870fcb75396 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005aec58}] Aliases:map[]}"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.296863185Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.308235964Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:882fecf8de66a9a48f674978729fbb0caf39a17371cbe92c4945676f27b5d044 UID:d26c68d7-e6ef-4b9c-9cb5-08387e67e53f NetNS:/var/run/netns/44df7457-1692-409e-9dfb-2870fcb75396 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005aec58}] Aliases:map[]}"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.30841475Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.309413681Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.310739284Z" level=info msg="Ran pod sandbox 882fecf8de66a9a48f674978729fbb0caf39a17371cbe92c4945676f27b5d044 with infra container: default/busybox/POD" id=12326517-695d-46c9-b569-97ccde930ea9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.312105198Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=71f382c5-1c76-4694-89ff-b2894221b892 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.312205527Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=71f382c5-1c76-4694-89ff-b2894221b892 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.312235674Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=71f382c5-1c76-4694-89ff-b2894221b892 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.312879793Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=364ae947-13c6-4c8c-ac3d-17d96bd808a8 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.315959577Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.965729134Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=364ae947-13c6-4c8c-ac3d-17d96bd808a8 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.966738707Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=83a47053-90a5-4bd0-a1d8-32b5f17df779 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.969197587Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=63fb654d-d6b9-4e42-b4f4-a82fdc0361df name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.973222861Z" level=info msg="Creating container: default/busybox/busybox" id=39e13dd0-2795-45f4-9646-448dd1a3878f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.973364218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.980686218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:05 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:05.981300056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:06 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:06.010953998Z" level=info msg="Created container cedaea6d5973df896ac31a352d2526973b9b09e49a94960b424726d0048fc06a: default/busybox/busybox" id=39e13dd0-2795-45f4-9646-448dd1a3878f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:06 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:06.012063119Z" level=info msg="Starting container: cedaea6d5973df896ac31a352d2526973b9b09e49a94960b424726d0048fc06a" id=02748d7f-cfd6-4fb1-9cfe-743c3aeb1711 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:06 default-k8s-diff-port-125336 crio[768]: time="2025-12-10T06:15:06.014868862Z" level=info msg="Started container" PID=3060 containerID=cedaea6d5973df896ac31a352d2526973b9b09e49a94960b424726d0048fc06a description=default/busybox/busybox id=02748d7f-cfd6-4fb1-9cfe-743c3aeb1711 name=/runtime.v1.RuntimeService/StartContainer sandboxID=882fecf8de66a9a48f674978729fbb0caf39a17371cbe92c4945676f27b5d044
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	cedaea6d5973d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   882fecf8de66a       busybox                                                default
	cd90c98937dcb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   1f220cd75ca58       coredns-66bc5c9577-gkk6m                               kube-system
	6233c1fe9e6c8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   b9e2530cc9812       storage-provisioner                                    kube-system
	8834fcdc823db       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   a906bda92cc57       kindnet-lfds9                                          kube-system
	6febd0d5cbe8e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      26 seconds ago      Running             kube-proxy                0                   d109ec3ca0e26       kube-proxy-mw5sp                                       kube-system
	06a4ce5e1db1c       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      37 seconds ago      Running             kube-controller-manager   0                   15c3c7a61a35d       kube-controller-manager-default-k8s-diff-port-125336   kube-system
	2f7df8952ed09       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      37 seconds ago      Running             kube-scheduler            0                   124e48a1c72fd       kube-scheduler-default-k8s-diff-port-125336            kube-system
	dad4cce4ee0a8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      37 seconds ago      Running             etcd                      0                   ff4e5b5b36a1b       etcd-default-k8s-diff-port-125336                      kube-system
	d23f99125cc1c       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      37 seconds ago      Running             kube-apiserver            0                   9892b8c5a522d       kube-apiserver-default-k8s-diff-port-125336            kube-system
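
	Editor's note: this listing is the CRI-level view of the node, the same kind of information `crictl ps -a` reports. Every control-plane and addon container is Running with ATTEMPT 0, i.e. none of them had restarted when the report was captured.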
	
	
	==> coredns [cd90c98937dcb7062ecb8882b7e3e183164c5c5610f37a97f94822c14779dc72] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33406 - 48329 "HINFO IN 4811361290440495144.2161168055270271907. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111068823s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-125336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-125336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=default-k8s-diff-port-125336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_14_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-125336
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:12 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:12 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:12 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:15:12 +0000   Wed, 10 Dec 2025 06:15:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-125336
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                f4329173-01c3-494e-8c73-1314ca67fddf
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gkk6m                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-default-k8s-diff-port-125336                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-lfds9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-125336             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-125336    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-mw5sp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-125336             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node default-k8s-diff-port-125336 event: Registered Node default-k8s-diff-port-125336 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-125336 status is now: NodeReady
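
	Editor's note: the Allocated resources percentages follow directly from the capacity shown above: 850m of CPU requested out of 8 allocatable cores (8000m) is about 10.6%, displayed as 10%, and 220Mi of memory requested out of 32863348Ki (roughly 31.3Gi) is under 1%, displayed as 0%.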
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
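
	Editor's note: the repeated "martian source" entries are the kernel logging broadcast ARP frames from pod addresses (10.244.0.x) arriving on the node's eth0. With the nested Docker/kindnet networking used in these jobs they are typically expected noise rather than an error in their own right.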
	
	
	==> etcd [dad4cce4ee0a80cf7acb7e14966c8c2b3ff76e1d8740318fd6308c0e07d850f1] <==
	{"level":"warn","ts":"2025-12-10T06:14:38.571825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.578759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.586402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.593569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.601724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.608456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.616875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.624537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.632853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.641121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.648936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.655665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.666343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.674243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.682271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.704188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.712470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.720576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:14:38.773923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52470","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T06:15:01.054177Z","caller":"traceutil/trace.go:172","msg":"trace[105285398] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"137.523249ms","start":"2025-12-10T06:15:00.916637Z","end":"2025-12-10T06:15:01.054160Z","steps":["trace[105285398] 'process raft request'  (duration: 125.579412ms)","trace[105285398] 'compare'  (duration: 11.831453ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:15:01.091619Z","caller":"traceutil/trace.go:172","msg":"trace[243547022] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"161.359324ms","start":"2025-12-10T06:15:00.930237Z","end":"2025-12-10T06:15:01.091596Z","steps":["trace[243547022] 'process raft request'  (duration: 161.267401ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:15:01.091629Z","caller":"traceutil/trace.go:172","msg":"trace[844302501] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"163.42967ms","start":"2025-12-10T06:15:00.928185Z","end":"2025-12-10T06:15:01.091615Z","steps":["trace[844302501] 'process raft request'  (duration: 163.233441ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:15:01.247406Z","caller":"traceutil/trace.go:172","msg":"trace[646820551] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"146.938127ms","start":"2025-12-10T06:15:01.100445Z","end":"2025-12-10T06:15:01.247383Z","steps":["trace[646820551] 'process raft request'  (duration: 127.992366ms)","trace[646820551] 'compare'  (duration: 18.815902ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:15:01.408485Z","caller":"traceutil/trace.go:172","msg":"trace[1278307724] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"155.838992ms","start":"2025-12-10T06:15:01.252618Z","end":"2025-12-10T06:15:01.408457Z","steps":["trace[1278307724] 'process raft request'  (duration: 136.26578ms)","trace[1278307724] 'compare'  (duration: 19.476153ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:15:01.504293Z","caller":"traceutil/trace.go:172","msg":"trace[984832461] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"251.542704ms","start":"2025-12-10T06:15:01.252728Z","end":"2025-12-10T06:15:01.504271Z","steps":["trace[984832461] 'process raft request'  (duration: 251.414668ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:15:14 up 57 min,  0 user,  load average: 6.41, 4.81, 3.00
	Linux default-k8s-diff-port-125336 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8834fcdc823db14c89b67962e49763fd192475007587c880f55c0719fdeeb63c] <==
	I1210 06:14:50.385054       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:14:50.385325       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:14:50.385445       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:14:50.385458       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:14:50.385477       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:14:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:14:50.662677       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:14:50.662707       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:14:50.662718       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:14:50.662851       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:14:50.981880       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:14:50.981917       1 metrics.go:72] Registering metrics
	I1210 06:14:50.982010       1 controller.go:711] "Syncing nftables rules"
	I1210 06:15:00.667195       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:15:00.667251       1 main.go:301] handling current node
	I1210 06:15:10.665189       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:15:10.665218       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d23f99125cc1ce4ceb9413b9ca080b7984f088a0b25a697cc5b46e96f947d2a6] <==
	I1210 06:14:39.298640       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:14:39.299890       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:14:39.299962       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1210 06:14:39.301672       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:14:39.302874       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:14:39.304253       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 06:14:39.309804       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:14:39.328043       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:14:40.201818       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:14:40.207523       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:14:40.207541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:14:40.648178       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:14:40.681301       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:14:40.806394       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:14:40.812834       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1210 06:14:40.814167       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:14:40.818229       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:14:41.233433       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:14:41.713496       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:14:41.723331       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:14:41.729955       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:14:46.889542       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:14:46.994134       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1210 06:14:47.136794       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:14:47.141972       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [06a4ce5e1db1ccb06a6f9a602a02347a517b8885fce1434a60211dbed8975d65] <==
	I1210 06:14:46.231845       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 06:14:46.231813       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:14:46.231857       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:14:46.231886       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:14:46.231895       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:14:46.231949       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:14:46.232175       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:14:46.232609       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:14:46.232718       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:14:46.232720       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:14:46.232820       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:14:46.232894       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:14:46.233028       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:14:46.233985       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:14:46.234052       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:14:46.234074       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:14:46.234184       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:14:46.236373       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:14:46.238614       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:14:46.239814       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:14:46.242969       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 06:14:46.249485       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:14:46.254790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:14:46.254791       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:15:01.183461       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6febd0d5cbe8ea4fbf63db960cdf3e7409d6ae9438f9cc49ed2c0bf0c097f56b] <==
	I1210 06:14:48.014014       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:14:48.074818       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:14:48.175936       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:14:48.175970       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:14:48.176072       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:14:48.194443       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:14:48.194490       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:14:48.199759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:14:48.200215       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:14:48.200236       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:48.201554       1 config.go:200] "Starting service config controller"
	I1210 06:14:48.201567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:14:48.201610       1 config.go:309] "Starting node config controller"
	I1210 06:14:48.201625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:14:48.201630       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:14:48.201872       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:14:48.201936       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:14:48.201900       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:14:48.201961       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:14:48.302748       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:14:48.302780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:14:48.302757       1 shared_informer.go:356] "Caches are synced" controller="service config"
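
	Editor's note: the single error-level line above ("Kube-proxy configuration may be incomplete or incorrect ... nodePortAddresses is unset") is a configuration warning that NodePort connections will be accepted on all local IPs, not a failure; the proxier continues and all of its config caches sync.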
	
	
	==> kube-scheduler [2f7df8952ed098a34be1333b3c9143e4332ba5ce8348e7110d13efd8faea6d4c] <==
	E1210 06:14:39.268657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 06:14:39.268818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:14:39.270071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:14:39.270155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:14:39.270514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:14:39.270548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:14:39.270595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:14:39.270610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:14:39.270637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:14:39.270374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:14:39.270398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:14:39.270651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:14:39.270703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:14:39.270729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:14:39.270450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:14:40.112910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:14:40.116236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:14:40.206496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:14:40.356257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:14:40.395698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:14:40.428913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:14:40.430819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:14:40.470988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:14:40.476047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1210 06:14:40.766550       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
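
	Editor's note: the burst of "Failed to watch ... forbidden" errors at 06:14:39-40 is consistent with the scheduler's informers starting before the apiserver has finished reconciling its RBAC bootstrap roles; once those roles exist the watches succeed and, per the last line, the caches sync at 06:14:40.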
	
	
	==> kubelet <==
	Dec 10 06:14:46 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:46.288548    2392 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083232    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14d4cc08-bd99-41e5-a772-b5197e8b16b6-xtables-lock\") pod \"kindnet-lfds9\" (UID: \"14d4cc08-bd99-41e5-a772-b5197e8b16b6\") " pod="kube-system/kindnet-lfds9"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083328    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d4cc08-bd99-41e5-a772-b5197e8b16b6-lib-modules\") pod \"kindnet-lfds9\" (UID: \"14d4cc08-bd99-41e5-a772-b5197e8b16b6\") " pod="kube-system/kindnet-lfds9"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083355    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94c4f93c-3851-4ed9-ae3b-7900e64abf9f-lib-modules\") pod \"kube-proxy-mw5sp\" (UID: \"94c4f93c-3851-4ed9-ae3b-7900e64abf9f\") " pod="kube-system/kube-proxy-mw5sp"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083379    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpqgk\" (UniqueName: \"kubernetes.io/projected/94c4f93c-3851-4ed9-ae3b-7900e64abf9f-kube-api-access-xpqgk\") pod \"kube-proxy-mw5sp\" (UID: \"94c4f93c-3851-4ed9-ae3b-7900e64abf9f\") " pod="kube-system/kube-proxy-mw5sp"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083415    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/14d4cc08-bd99-41e5-a772-b5197e8b16b6-cni-cfg\") pod \"kindnet-lfds9\" (UID: \"14d4cc08-bd99-41e5-a772-b5197e8b16b6\") " pod="kube-system/kindnet-lfds9"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083438    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4vh4\" (UniqueName: \"kubernetes.io/projected/14d4cc08-bd99-41e5-a772-b5197e8b16b6-kube-api-access-c4vh4\") pod \"kindnet-lfds9\" (UID: \"14d4cc08-bd99-41e5-a772-b5197e8b16b6\") " pod="kube-system/kindnet-lfds9"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083462    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94c4f93c-3851-4ed9-ae3b-7900e64abf9f-kube-proxy\") pod \"kube-proxy-mw5sp\" (UID: \"94c4f93c-3851-4ed9-ae3b-7900e64abf9f\") " pod="kube-system/kube-proxy-mw5sp"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:47.083490    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94c4f93c-3851-4ed9-ae3b-7900e64abf9f-xtables-lock\") pod \"kube-proxy-mw5sp\" (UID: \"94c4f93c-3851-4ed9-ae3b-7900e64abf9f\") " pod="kube-system/kube-proxy-mw5sp"
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: E1210 06:14:47.190908    2392 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: E1210 06:14:47.190942    2392 projected.go:196] Error preparing data for projected volume kube-api-access-c4vh4 for pod kube-system/kindnet-lfds9: configmap "kube-root-ca.crt" not found
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: E1210 06:14:47.191032    2392 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/14d4cc08-bd99-41e5-a772-b5197e8b16b6-kube-api-access-c4vh4 podName:14d4cc08-bd99-41e5-a772-b5197e8b16b6 nodeName:}" failed. No retries permitted until 2025-12-10 06:14:47.690999642 +0000 UTC m=+6.235520525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c4vh4" (UniqueName: "kubernetes.io/projected/14d4cc08-bd99-41e5-a772-b5197e8b16b6-kube-api-access-c4vh4") pod "kindnet-lfds9" (UID: "14d4cc08-bd99-41e5-a772-b5197e8b16b6") : configmap "kube-root-ca.crt" not found
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: E1210 06:14:47.191386    2392 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: E1210 06:14:47.191420    2392 projected.go:196] Error preparing data for projected volume kube-api-access-xpqgk for pod kube-system/kube-proxy-mw5sp: configmap "kube-root-ca.crt" not found
	Dec 10 06:14:47 default-k8s-diff-port-125336 kubelet[2392]: E1210 06:14:47.191489    2392 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94c4f93c-3851-4ed9-ae3b-7900e64abf9f-kube-api-access-xpqgk podName:94c4f93c-3851-4ed9-ae3b-7900e64abf9f nodeName:}" failed. No retries permitted until 2025-12-10 06:14:47.691462986 +0000 UTC m=+6.235989974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xpqgk" (UniqueName: "kubernetes.io/projected/94c4f93c-3851-4ed9-ae3b-7900e64abf9f-kube-api-access-xpqgk") pod "kube-proxy-mw5sp" (UID: "94c4f93c-3851-4ed9-ae3b-7900e64abf9f") : configmap "kube-root-ca.crt" not found
	Dec 10 06:14:48 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:48.611910    2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mw5sp" podStartSLOduration=1.611885504 podStartE2EDuration="1.611885504s" podCreationTimestamp="2025-12-10 06:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:14:48.611845822 +0000 UTC m=+7.156366710" watchObservedRunningTime="2025-12-10 06:14:48.611885504 +0000 UTC m=+7.156406390"
	Dec 10 06:14:52 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:14:52.667936    2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lfds9" podStartSLOduration=3.4569583489999998 podStartE2EDuration="5.667914438s" podCreationTimestamp="2025-12-10 06:14:47 +0000 UTC" firstStartedPulling="2025-12-10 06:14:47.937297194 +0000 UTC m=+6.481818078" lastFinishedPulling="2025-12-10 06:14:50.148253289 +0000 UTC m=+8.692774167" observedRunningTime="2025-12-10 06:14:50.633255358 +0000 UTC m=+9.177776244" watchObservedRunningTime="2025-12-10 06:14:52.667914438 +0000 UTC m=+11.212435325"
	Dec 10 06:15:00 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:00.926195    2392 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:15:01 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:01.587763    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48z9x\" (UniqueName: \"kubernetes.io/projected/d31f981a-faff-40fd-87cd-c2e5b25f8e2a-kube-api-access-48z9x\") pod \"storage-provisioner\" (UID: \"d31f981a-faff-40fd-87cd-c2e5b25f8e2a\") " pod="kube-system/storage-provisioner"
	Dec 10 06:15:01 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:01.587827    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxgt4\" (UniqueName: \"kubernetes.io/projected/0b83f27c-1359-488f-bf61-c716f522dfad-kube-api-access-bxgt4\") pod \"coredns-66bc5c9577-gkk6m\" (UID: \"0b83f27c-1359-488f-bf61-c716f522dfad\") " pod="kube-system/coredns-66bc5c9577-gkk6m"
	Dec 10 06:15:01 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:01.587860    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d31f981a-faff-40fd-87cd-c2e5b25f8e2a-tmp\") pod \"storage-provisioner\" (UID: \"d31f981a-faff-40fd-87cd-c2e5b25f8e2a\") " pod="kube-system/storage-provisioner"
	Dec 10 06:15:01 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:01.587939    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b83f27c-1359-488f-bf61-c716f522dfad-config-volume\") pod \"coredns-66bc5c9577-gkk6m\" (UID: \"0b83f27c-1359-488f-bf61-c716f522dfad\") " pod="kube-system/coredns-66bc5c9577-gkk6m"
	Dec 10 06:15:02 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:02.772653    2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gkk6m" podStartSLOduration=15.772628144 podStartE2EDuration="15.772628144s" podCreationTimestamp="2025-12-10 06:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:02.653765114 +0000 UTC m=+21.198286001" watchObservedRunningTime="2025-12-10 06:15:02.772628144 +0000 UTC m=+21.317149030"
	Dec 10 06:15:02 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:02.784369    2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.784349505 podStartE2EDuration="15.784349505s" podCreationTimestamp="2025-12-10 06:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:02.772543104 +0000 UTC m=+21.317064006" watchObservedRunningTime="2025-12-10 06:15:02.784349505 +0000 UTC m=+21.328870391"
	Dec 10 06:15:05 default-k8s-diff-port-125336 kubelet[2392]: I1210 06:15:05.113206    2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4qrp\" (UniqueName: \"kubernetes.io/projected/d26c68d7-e6ef-4b9c-9cb5-08387e67e53f-kube-api-access-g4qrp\") pod \"busybox\" (UID: \"d26c68d7-e6ef-4b9c-9cb5-08387e67e53f\") " pod="default/busybox"
	
	
	==> storage-provisioner [6233c1fe9e6c89a4f019a21e1ae93ad6c457b644adf8338a88b9ca09a3a1e670] <==
	I1210 06:15:01.882624       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:15:01.892808       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:15:01.892859       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:15:01.895438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:01.902354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:01.902526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:15:01.902686       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125336_ccc963ce-afba-4fdf-97d6-2604bfd5acfe!
	I1210 06:15:01.902691       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e5ce82f-82e7-4b42-b704-b5ef142d393d", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-125336_ccc963ce-afba-4fdf-97d6-2604bfd5acfe became leader
	W1210 06:15:01.907418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:01.913122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:02.003899       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125336_ccc963ce-afba-4fdf-97d6-2604bfd5acfe!
	W1210 06:15:03.916656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:03.920912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:05.929227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:05.938523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:07.941787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:07.946197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:09.951064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:09.956784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:11.960357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:11.964719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:13.969012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:13.972967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
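The storage-provisioner log above shows its leader election still going through a v1 Endpoints object, which the API server flags as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. A quick way to look at both resources on this profile is sketched below; the context name and the lock object name k8s.io-minikube-hostpath are taken from the log, and kubectl is assumed to already point at this cluster:

	# the Endpoints object the provisioner uses as its leader-election lock
	kubectl --context default-k8s-diff-port-125336 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the EndpointSlice API that the deprecation warning recommends instead
	kubectl --context default-k8s-diff-port-125336 -n kube-system get endpointslices.discovery.k8s.io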
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-125336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)
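The failing step can be replayed outside the test harness by reissuing the two commands this run recorded for the profile: start it with the same arguments, then run the same addons enable call that exited with status 11. A sketch using the exact arguments captured in this report (it assumes out/minikube-linux-amd64 is already built and docker is available on the host):

	out/minikube-linux-amd64 start -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.34.3
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain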

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.391442ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
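The exit above comes from minikube's paused-state check: before enabling the addon it runs "sudo runc list -f json" inside the node, and that command fails here because /run/runc does not exist. The same condition can be checked by hand against this profile, using the ssh form that appears elsewhere in this report; a small sketch, assuming the newest-cni-218688 container is still running:

	# repeat the exact check that MK_ADDON_ENABLE_PAUSED reports as failing
	out/minikube-linux-amd64 ssh -p newest-cni-218688 sudo runc list -f json
	# confirm whether the runc state directory exists inside the node
	out/minikube-linux-amd64 ssh -p newest-cni-218688 ls -ld /run/runc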
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-218688
helpers_test.go:244: (dbg) docker inspect newest-cni-218688:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e",
	        "Created": "2025-12-10T06:15:01.877568819Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390320,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:15:01.927793469Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/hostname",
	        "HostsPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/hosts",
	        "LogPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e-json.log",
	        "Name": "/newest-cni-218688",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-218688:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-218688",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e",
	                "LowerDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-218688",
	                "Source": "/var/lib/docker/volumes/newest-cni-218688/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-218688",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-218688",
	                "name.minikube.sigs.k8s.io": "newest-cni-218688",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a5b0c938a5bb10b4c9dcc5fb4a2ac6b945ccea599ad47fff9726c7fb27cf6e69",
	            "SandboxKey": "/var/run/docker/netns/a5b0c938a5bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-218688": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1445af997c5684ad6e249fa16e019df4a952bdc0bbb87997d65034a6fd60980c",
	                    "EndpointID": "1ecf3b62c29408a699295631108336816b01129b1f67fdf0d1d897cf4a16a7ad",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "7a:2a:03:bd:6f:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-218688",
	                        "14958bae78d3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
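When only one or two fields from this inspect dump are needed, docker's --format Go template can pull them out directly. A small sketch with field paths taken from the JSON above, extracting the host port mapped to the API server's 8443/tcp and the node's IP on the profile network (for this capture they should print 33131 and 192.168.76.2):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-218688
	docker inspect -f '{{ (index .NetworkSettings.Networks "newest-cni-218688").IPAddress }}' newest-cni-218688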
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-218688 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ ssh     │ -p bridge-094798 sudo cat /etc/containerd/config.toml                                                                                                                                                                                              │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo containerd config dump                                                                                                                                                                                                       │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo systemctl cat crio --no-pager                                                                                                                                                                                                │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                      │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ ssh     │ -p bridge-094798 sudo crio config                                                                                                                                                                                                                  │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p bridge-094798                                                                                                                                                                                                                                   │ bridge-094798                │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                                                                                                    │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:14:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:14:57.244539  389191 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:57.244673  389191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:57.244688  389191 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:57.244695  389191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:57.245001  389191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:14:57.245593  389191 out.go:368] Setting JSON to false
	I1210 06:14:57.247197  389191 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3441,"bootTime":1765343856,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:14:57.247274  389191 start.go:143] virtualization: kvm guest
	I1210 06:14:57.252874  389191 out.go:179] * [embed-certs-028500] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:14:57.255011  389191 notify.go:221] Checking for updates...
	I1210 06:14:57.255717  389191 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:14:57.257330  389191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:14:57.258824  389191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:14:57.260271  389191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:14:57.266331  389191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:14:57.268173  389191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:14:57.269975  389191 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:14:57.270777  389191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:14:57.300545  389191 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:14:57.300644  389191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:57.369334  389191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 06:14:57.356993167 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:57.369491  389191 docker.go:319] overlay module found
	I1210 06:14:57.373368  389191 out.go:179] * Using the docker driver based on existing profile
	I1210 06:14:57.374676  389191 start.go:309] selected driver: docker
	I1210 06:14:57.374705  389191 start.go:927] validating driver "docker" against &{Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:57.374820  389191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:14:57.375704  389191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:57.447850  389191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 06:14:57.435495137 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:14:57.448243  389191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:14:57.448287  389191 cni.go:84] Creating CNI manager for ""
	I1210 06:14:57.448362  389191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:14:57.448422  389191 start.go:353] cluster config:
	{Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:14:57.488130  389191 out.go:179] * Starting "embed-certs-028500" primary control-plane node in "embed-certs-028500" cluster
	I1210 06:14:57.490068  389191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:14:57.491685  389191 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:14:57.493456  389191 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:14:57.493556  389191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:14:57.519744  389191 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:14:57.522000  389191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:14:57.522027  389191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:14:57.607817  389191 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:14:57.608003  389191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/config.json ...
	I1210 06:14:57.608405  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:57.608738  389191 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:14:57.608848  389191 start.go:360] acquireMachinesLock for embed-certs-028500: {Name:mk1cdfd1ea9c285bf25b2cff0c617487c1b93472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:57.609252  389191 start.go:364] duration metric: took 370.774µs to acquireMachinesLock for "embed-certs-028500"
	I1210 06:14:57.609298  389191 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:14:57.609306  389191 fix.go:54] fixHost starting: 
	I1210 06:14:57.609617  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:57.635212  389191 fix.go:112] recreateIfNeeded on embed-certs-028500: state=Stopped err=<nil>
	W1210 06:14:57.635250  389191 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 06:14:56.513637  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:14:59.014488  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:14:58.050263  377144 node_ready.go:57] node "default-k8s-diff-port-125336" has "Ready":"False" status (will retry)
	W1210 06:15:00.546648  377144 node_ready.go:57] node "default-k8s-diff-port-125336" has "Ready":"False" status (will retry)
	I1210 06:15:01.552989  377144 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:01.553023  377144 node_ready.go:38] duration metric: took 14.509783894s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:01.553042  377144 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:01.553114  377144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:01.570326  377144 api_server.go:72] duration metric: took 14.851282275s to wait for apiserver process to appear ...
	I1210 06:15:01.570350  377144 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:01.570373  377144 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:01.576618  377144 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1210 06:15:01.577871  377144 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:01.577893  377144 api_server.go:131] duration metric: took 7.536897ms to wait for apiserver health ...
	I1210 06:15:01.577912  377144 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:01.581618  377144 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:01.581652  377144 system_pods.go:61] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:01.581664  377144 system_pods.go:61] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:01.581672  377144 system_pods.go:61] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:01.581677  377144 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:01.581683  377144 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:01.581688  377144 system_pods.go:61] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:01.581693  377144 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:01.581699  377144 system_pods.go:61] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:01.581708  377144 system_pods.go:74] duration metric: took 3.787481ms to wait for pod list to return data ...
	I1210 06:15:01.581717  377144 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:01.584444  377144 default_sa.go:45] found service account: "default"
	I1210 06:15:01.584463  377144 default_sa.go:55] duration metric: took 2.740448ms for default service account to be created ...
	I1210 06:15:01.584473  377144 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:01.587134  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:01.587156  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:01.587168  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:01.587176  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:01.587182  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:01.587188  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:01.587200  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:01.587206  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:01.587226  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:01.587250  377144 retry.go:31] will retry after 220.063224ms: missing components: kube-dns
	I1210 06:14:56.986342  388833 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:14:56.986752  388833 start.go:159] libmachine.API.Create for "newest-cni-218688" (driver="docker")
	I1210 06:14:56.986797  388833 client.go:173] LocalClient.Create starting
	I1210 06:14:56.986894  388833 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem
	I1210 06:14:56.986932  388833 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:56.986954  388833 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:56.987031  388833 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem
	I1210 06:14:56.987089  388833 main.go:143] libmachine: Decoding PEM data...
	I1210 06:14:56.987109  388833 main.go:143] libmachine: Parsing certificate...
	I1210 06:14:56.987565  388833 cli_runner.go:164] Run: docker network inspect newest-cni-218688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:14:57.010491  388833 cli_runner.go:211] docker network inspect newest-cni-218688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:14:57.010694  388833 network_create.go:284] running [docker network inspect newest-cni-218688] to gather additional debugging logs...
	I1210 06:14:57.010720  388833 cli_runner.go:164] Run: docker network inspect newest-cni-218688
	W1210 06:14:57.031777  388833 cli_runner.go:211] docker network inspect newest-cni-218688 returned with exit code 1
	I1210 06:14:57.031800  388833 network_create.go:287] error running [docker network inspect newest-cni-218688]: docker network inspect newest-cni-218688: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-218688 not found
	I1210 06:14:57.031809  388833 network_create.go:289] output of [docker network inspect newest-cni-218688]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-218688 not found
	
	** /stderr **
	I1210 06:14:57.031880  388833 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:14:57.053367  388833 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
	I1210 06:14:57.054400  388833 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad22705e186e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:8a:92:75:2c:7b} reservation:<nil>}
	I1210 06:14:57.055454  388833 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-782a6994f202 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:35:84:e8:81:18} reservation:<nil>}
	I1210 06:14:57.056624  388833 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d51550}
	I1210 06:14:57.056657  388833 network_create.go:124] attempt to create docker network newest-cni-218688 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:14:57.056718  388833 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-218688 newest-cni-218688
	I1210 06:14:57.119628  388833 network_create.go:108] docker network newest-cni-218688 192.168.76.0/24 created
	I1210 06:14:57.119662  388833 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-218688" container
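[editor note] network.go walks the private 192.168.x.0/24 ranges, skips subnets already claimed by other minikube bridges, and creates the per-profile network on the first free one. A hand-run sketch of the command logged above (all flag values taken from the log; the initial ls is only a convenience check, not something the test runs):

    # see which subnets existing minikube bridges already occupy
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
    # create the profile bridge on the free subnet chosen above
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=newest-cni-218688 \
      newest-cni-218688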
	I1210 06:14:57.119732  388833 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:14:57.143498  388833 cli_runner.go:164] Run: docker volume create newest-cni-218688 --label name.minikube.sigs.k8s.io=newest-cni-218688 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:14:57.165716  388833 oci.go:103] Successfully created a docker volume newest-cni-218688
	I1210 06:14:57.165800  388833 cli_runner.go:164] Run: docker run --rm --name newest-cni-218688-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-218688 --entrypoint /usr/bin/test -v newest-cni-218688:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:14:57.872793  388833 oci.go:107] Successfully prepared a docker volume newest-cni-218688
	I1210 06:14:57.872864  388833 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:14:57.872875  388833 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:14:57.872938  388833 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-218688:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
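[editor note] The preloaded image tarball is unpacked into the profile's Docker volume by running tar inside a throwaway kicbase container. A minimal sketch of that extraction, with the cached tarball path abbreviated to a placeholder and the kicbase @sha256 digest pin from the log omitted for brevity:

    # PRELOAD_TARBALL is a placeholder for .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v newest-cni-218688:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083 \
      -I lz4 -xf /preloaded.tar -C /extractDir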
	I1210 06:14:57.636612  389191 out.go:252] * Restarting existing docker container for "embed-certs-028500" ...
	I1210 06:14:57.636691  389191 cli_runner.go:164] Run: docker start embed-certs-028500
	I1210 06:14:57.780632  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:57.947579  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:14:57.975221  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:14:58.000753  389191 kic.go:430] container "embed-certs-028500" state is running.
	I1210 06:14:58.001215  389191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-028500
	I1210 06:14:58.025128  389191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/config.json ...
	I1210 06:14:58.025370  389191 machine.go:94] provisionDockerMachine start ...
	I1210 06:14:58.025441  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:14:58.049145  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:14:58.049513  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:14:58.049528  389191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:14:58.050364  389191 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39310->127.0.0.1:33123: read: connection reset by peer
	I1210 06:14:58.112870  389191 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.112983  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:14:58.113000  389191 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 135.292µs
	I1210 06:14:58.113016  389191 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:14:58.113037  389191 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113031  389191 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113118  389191 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113158  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:14:58.113167  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:14:58.113176  389191 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 59.743µs
	I1210 06:14:58.113167  389191 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 160.676µs
	I1210 06:14:58.113184  389191 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:14:58.113186  389191 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:14:58.113202  389191 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113207  389191 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.112867  389191 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113255  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:14:58.113263  389191 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 417.914µs
	I1210 06:14:58.113278  389191 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:14:58.113271  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:14:58.113288  389191 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 88.465µs
	I1210 06:14:58.113295  389191 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:14:58.113105  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:14:58.113285  389191 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:14:58.113305  389191 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 270.517µs
	I1210 06:14:58.113312  389191 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:14:58.113330  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:14:58.113337  389191 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 55.111µs
	I1210 06:14:58.113340  389191 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:14:58.113347  389191 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:14:58.113350  389191 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 146.007µs
	I1210 06:14:58.113357  389191 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:14:58.113369  389191 cache.go:87] Successfully saved all images to host disk.
	I1210 06:15:01.188191  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-028500
	
	I1210 06:15:01.188219  389191 ubuntu.go:182] provisioning hostname "embed-certs-028500"
	I1210 06:15:01.188270  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:01.207561  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:01.207777  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:15:01.207789  389191 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-028500 && echo "embed-certs-028500" | sudo tee /etc/hostname
	I1210 06:15:01.377128  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-028500
	
	I1210 06:15:01.377211  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:01.398849  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:01.399108  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:15:01.399132  389191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-028500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-028500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-028500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:01.535984  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:01.536018  389191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:01.536060  389191 ubuntu.go:190] setting up certificates
	I1210 06:15:01.536106  389191 provision.go:84] configureAuth start
	I1210 06:15:01.536172  389191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-028500
	I1210 06:15:01.561659  389191 provision.go:143] copyHostCerts
	I1210 06:15:01.561742  389191 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:01.561762  389191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:01.561834  389191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:01.561968  389191 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:01.561982  389191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:01.562022  389191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:01.562514  389191 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:01.562537  389191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:01.562588  389191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:01.562716  389191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.embed-certs-028500 san=[127.0.0.1 192.168.85.2 embed-certs-028500 localhost minikube]
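[editor note] provision.go:117 generates a per-machine server certificate signed by the profile CA, with the container IP and hostname in the SAN list. minikube does this in-process via libmachine; the following openssl commands are only an illustrative stand-in for the same result, using the SANs shown in the log:

    # key + CSR for the machine, then sign with the profile CA, carrying the logged SANs
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.embed-certs-028500" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:embed-certs-028500,DNS:localhost,DNS:minikube') \
      -out server.pem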
	I1210 06:15:02.084445  389191 provision.go:177] copyRemoteCerts
	I1210 06:15:02.084526  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:02.084586  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:02.107807  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:02.212977  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:02.236198  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:15:02.258387  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:15:02.281338  389191 provision.go:87] duration metric: took 745.196481ms to configureAuth
	I1210 06:15:02.281368  389191 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:02.281583  389191 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:02.281692  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:02.306737  389191 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.306957  389191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1210 06:15:02.306969  389191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:02.915340  389191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:02.915367  389191 machine.go:97] duration metric: took 4.889981384s to provisionDockerMachine
	I1210 06:15:02.915382  389191 start.go:293] postStartSetup for "embed-certs-028500" (driver="docker")
	I1210 06:15:02.915396  389191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:02.915456  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:02.915508  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:02.937476  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.043238  389191 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:03.047553  389191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:03.047582  389191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:03.047595  389191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:03.047664  389191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:03.047768  389191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:03.047894  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:03.055892  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:03.077201  389191 start.go:296] duration metric: took 161.803141ms for postStartSetup
	I1210 06:15:03.077285  389191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:03.077339  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:03.097852  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.194550  389191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:03.199729  389191 fix.go:56] duration metric: took 5.590414431s for fixHost
	I1210 06:15:03.199755  389191 start.go:83] releasing machines lock for "embed-certs-028500", held for 5.590466192s
	I1210 06:15:03.199824  389191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-028500
	I1210 06:15:03.217598  389191 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:03.217650  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:03.217691  389191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:03.217775  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:03.235590  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.236696  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:03.326722  389191 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:03.383536  389191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:03.417425  389191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:03.421757  389191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:03.421822  389191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:03.430311  389191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:03.430331  389191 start.go:496] detecting cgroup driver to use...
	I1210 06:15:03.430361  389191 detect.go:190] detected "systemd" cgroup driver on host os
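[editor note] detect.go reports the cgroup driver the host is using so CRI-O can be configured to match (systemd here). These are generic host-side checks for the same signals, not the exact probes detect.go runs:

    # unified cgroup v2 hierarchy reports cgroup2fs; v1 hosts report tmpfs
    stat -fc %T /sys/fs/cgroup/
    # what the host Docker daemon itself is using
    docker info --format '{{.CgroupDriver}}'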
	I1210 06:15:03.430406  389191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:03.444196  389191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:03.455486  389191 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:03.455524  389191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:03.468870  389191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:03.480337  389191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:03.561138  389191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:03.644816  389191 docker.go:234] disabling docker service ...
	I1210 06:15:03.644891  389191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:03.658552  389191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:03.670798  389191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:03.759208  389191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:03.844591  389191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:03.857559  389191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:03.871674  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.005035  389191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:04.005112  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.015471  389191 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:04.015537  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.024208  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.032265  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.040744  389191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:04.049019  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.058203  389191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.066434  389191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.074629  389191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:04.081503  389191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:04.088535  389191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:04.175868  389191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:04.318209  389191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:04.318273  389191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:04.322046  389191 start.go:564] Will wait 60s for crictl version
	I1210 06:15:04.322135  389191 ssh_runner.go:195] Run: which crictl
	I1210 06:15:04.325555  389191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:04.350000  389191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:04.350072  389191 ssh_runner.go:195] Run: crio --version
	I1210 06:15:04.384274  389191 ssh_runner.go:195] Run: crio --version
	I1210 06:15:04.413587  389191 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
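[editor note] The commands above configure CRI-O on the node: crictl is pointed at the crio socket, the pause image and cgroup manager are set in 02-crio.conf, conmon is placed in the pod cgroup, and crio is restarted. Condensed into the edits actually made (values verbatim from the log; assumed to be run inside the node, e.g. via minikube ssh):

    # point crictl at CRI-O
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pause image, systemd cgroup manager, conmon in the pod cgroup
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio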
	I1210 06:15:01.813507  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:01.813545  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:01.813554  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:01.813562  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:01.813569  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:01.813575  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:01.813580  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:01.813586  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:01.813593  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:01.813610  377144 retry.go:31] will retry after 267.505742ms: missing components: kube-dns
	I1210 06:15:02.087578  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:02.087615  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:02.087622  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:02.087630  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:02.087636  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:02.087641  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:02.087647  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:02.087652  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:02.087659  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:02.087681  377144 retry.go:31] will retry after 478.628156ms: missing components: kube-dns
	I1210 06:15:02.573126  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:02.573163  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:02.573171  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:02.573180  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:02.573186  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:02.573192  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:02.573198  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:02.573203  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:02.573211  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:02.573229  377144 retry.go:31] will retry after 580.697416ms: missing components: kube-dns
	I1210 06:15:03.157505  377144 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:03.157531  377144 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running
	I1210 06:15:03.157543  377144 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running
	I1210 06:15:03.157547  377144 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running
	I1210 06:15:03.157551  377144 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running
	I1210 06:15:03.157554  377144 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running
	I1210 06:15:03.157557  377144 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running
	I1210 06:15:03.157562  377144 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running
	I1210 06:15:03.157565  377144 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running
	I1210 06:15:03.157572  377144 system_pods.go:126] duration metric: took 1.573093393s to wait for k8s-apps to be running ...
	I1210 06:15:03.157583  377144 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:03.157617  377144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:03.170633  377144 system_svc.go:56] duration metric: took 13.042071ms WaitForService to wait for kubelet
	I1210 06:15:03.170659  377144 kubeadm.go:587] duration metric: took 16.451621166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:03.170679  377144 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:03.173392  377144 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:03.173416  377144 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:03.173437  377144 node_conditions.go:105] duration metric: took 2.752307ms to run NodePressure ...
	I1210 06:15:03.173453  377144 start.go:242] waiting for startup goroutines ...
	I1210 06:15:03.173467  377144 start.go:247] waiting for cluster config update ...
	I1210 06:15:03.173484  377144 start.go:256] writing updated cluster config ...
	I1210 06:15:03.173708  377144 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:03.177585  377144 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:03.180811  377144 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.184797  377144 pod_ready.go:94] pod "coredns-66bc5c9577-gkk6m" is "Ready"
	I1210 06:15:03.184817  377144 pod_ready.go:86] duration metric: took 3.988409ms for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.186688  377144 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.190479  377144 pod_ready.go:94] pod "etcd-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:03.190499  377144 pod_ready.go:86] duration metric: took 3.796111ms for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.192350  377144 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.196047  377144 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:03.196066  377144 pod_ready.go:86] duration metric: took 3.6949ms for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.197918  377144 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.581747  377144 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:03.581771  377144 pod_ready.go:86] duration metric: took 383.835581ms for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:03.781884  377144 pod_ready.go:83] waiting for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.182572  377144 pod_ready.go:94] pod "kube-proxy-mw5sp" is "Ready"
	I1210 06:15:04.182595  377144 pod_ready.go:86] duration metric: took 400.6856ms for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.382339  377144 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.781400  377144 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:04.781429  377144 pod_ready.go:86] duration metric: took 399.064273ms for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:04.781443  377144 pod_ready.go:40] duration metric: took 1.603830719s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:04.824049  377144 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:15:04.826123  377144 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-125336" cluster and "default" namespace by default
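[editor note] At this point start has written the kubeconfig entry for the profile, so the cluster state can be inspected directly; the context name matches the profile per the "Done!" line above:

    kubectl config current-context        # expected: default-k8s-diff-port-125336
    kubectl get nodes                     # single node, Ready, v1.34.3
    kubectl -n kube-system get pods       # the eight pods listed in the log, now Running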
	I1210 06:15:01.774377  388833 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-218688:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.901376892s)
	I1210 06:15:01.774418  388833 kic.go:203] duration metric: took 3.901537573s to extract preloaded images to volume ...
	W1210 06:15:01.774508  388833 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:15:01.774557  388833 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:15:01.774606  388833 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:15:01.855535  388833 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-218688 --name newest-cni-218688 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-218688 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-218688 --network newest-cni-218688 --ip 192.168.76.2 --volume newest-cni-218688:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:15:02.202973  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Running}}
	I1210 06:15:02.227253  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:02.250326  388833 cli_runner.go:164] Run: docker exec newest-cni-218688 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:15:02.306306  388833 oci.go:144] the created container "newest-cni-218688" has a running status.
	I1210 06:15:02.306350  388833 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa...
	I1210 06:15:02.429540  388833 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:15:02.461974  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:02.486892  388833 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:15:02.486911  388833 kic_runner.go:114] Args: [docker exec --privileged newest-cni-218688 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:15:02.542227  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:02.571238  388833 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:02.571403  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:02.598485  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.598828  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:02.598849  388833 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:02.744672  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-218688
	
	I1210 06:15:02.744725  388833 ubuntu.go:182] provisioning hostname "newest-cni-218688"
	I1210 06:15:02.744795  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:02.763519  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.763851  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:02.763868  388833 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-218688 && echo "newest-cni-218688" | sudo tee /etc/hostname
	I1210 06:15:02.922141  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-218688
	
	I1210 06:15:02.922238  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:02.944060  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:02.944382  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:02.944425  388833 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-218688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-218688/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-218688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:03.084245  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:03.084286  388833 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:03.084312  388833 ubuntu.go:190] setting up certificates
	I1210 06:15:03.084325  388833 provision.go:84] configureAuth start
	I1210 06:15:03.084408  388833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-218688
	I1210 06:15:03.104053  388833 provision.go:143] copyHostCerts
	I1210 06:15:03.104182  388833 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:03.104194  388833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:03.104263  388833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:03.104384  388833 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:03.104396  388833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:03.104438  388833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:03.104538  388833 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:03.104550  388833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:03.104594  388833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:03.104759  388833 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.newest-cni-218688 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-218688]
	I1210 06:15:03.165746  388833 provision.go:177] copyRemoteCerts
	I1210 06:15:03.165794  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:03.165834  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.185008  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.285811  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:03.304179  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:15:03.320430  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:15:03.337295  388833 provision.go:87] duration metric: took 252.946383ms to configureAuth
	I1210 06:15:03.337316  388833 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:03.337491  388833 config.go:182] Loaded profile config "newest-cni-218688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:15:03.337578  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.356119  388833 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:03.356311  388833 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 06:15:03.356332  388833 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:03.628161  388833 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:03.628184  388833 machine.go:97] duration metric: took 1.056870819s to provisionDockerMachine
	I1210 06:15:03.628194  388833 client.go:176] duration metric: took 6.641388389s to LocalClient.Create
	I1210 06:15:03.628213  388833 start.go:167] duration metric: took 6.641463566s to libmachine.API.Create "newest-cni-218688"
	I1210 06:15:03.628219  388833 start.go:293] postStartSetup for "newest-cni-218688" (driver="docker")
	I1210 06:15:03.628231  388833 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:03.628294  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:03.628335  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.649310  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.755171  388833 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:03.758919  388833 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:03.758945  388833 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:03.758960  388833 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:03.759010  388833 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:03.759117  388833 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:03.759249  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:03.766797  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:03.789487  388833 start.go:296] duration metric: took 161.255283ms for postStartSetup
	I1210 06:15:03.789902  388833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-218688
	I1210 06:15:03.810321  388833 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/config.json ...
	I1210 06:15:03.810624  388833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:03.810669  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.827235  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.920691  388833 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:03.925443  388833 start.go:128] duration metric: took 6.940841686s to createHost
	I1210 06:15:03.925465  388833 start.go:83] releasing machines lock for "newest-cni-218688", held for 6.940986965s
	I1210 06:15:03.925538  388833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-218688
	I1210 06:15:03.943157  388833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:03.943226  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.943161  388833 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:03.943295  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:03.962106  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:03.962256  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:04.110257  388833 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:04.116480  388833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:04.155253  388833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:04.159715  388833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:04.159781  388833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:04.185182  388833 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:15:04.185202  388833 start.go:496] detecting cgroup driver to use...
	I1210 06:15:04.185233  388833 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:04.185285  388833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:04.204011  388833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:04.215519  388833 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:04.215578  388833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:04.232898  388833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:04.250071  388833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:04.332823  388833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:04.424819  388833 docker.go:234] disabling docker service ...
	I1210 06:15:04.424881  388833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:04.443819  388833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:04.456381  388833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:04.543676  388833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:04.624401  388833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:04.637141  388833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:04.651674  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.788900  388833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:04.788963  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.800116  388833 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:04.800180  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.808891  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.817843  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.826902  388833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:04.835259  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.847378  388833 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:04.863021  388833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
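Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not captured from the node):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]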
	I1210 06:15:04.872831  388833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:04.881896  388833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:04.889969  388833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:04.983338  388833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:05.129757  388833 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:05.129815  388833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
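"Will wait 60s for socket path" above is a poll loop: stat the socket until it exists or the deadline passes. A minimal sketch of that wait, assuming a local filesystem check (minikube performs the stat through ssh_runner):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it appears or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}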
	I1210 06:15:05.134191  388833 start.go:564] Will wait 60s for crictl version
	I1210 06:15:05.134242  388833 ssh_runner.go:195] Run: which crictl
	I1210 06:15:05.138815  388833 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:05.165685  388833 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:05.165780  388833 ssh_runner.go:195] Run: crio --version
	I1210 06:15:05.201406  388833 ssh_runner.go:195] Run: crio --version
	I1210 06:15:05.236027  388833 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:15:05.237116  388833 cli_runner.go:164] Run: docker network inspect newest-cni-218688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:05.254613  388833 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:05.258586  388833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:05.270410  388833 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:15:04.414620  389191 cli_runner.go:164] Run: docker network inspect embed-certs-028500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:04.432200  389191 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:04.436064  389191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
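The bash one-liner above is an idempotent hosts update: drop any previous host.minikube.internal entry, append the current gateway IP, and copy the result back over /etc/hosts. A local Go sketch of the same strip-and-append logic (IP and hostname taken from the command above):

package main

import (
	"os"
	"strings"
)

// upsertHost removes any line ending in "\t<host>" and appends "<ip>\t<host>".
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}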
	I1210 06:15:04.446641  389191 kubeadm.go:884] updating cluster {Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:04.446840  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.588043  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.719419  389191 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:04.848978  389191 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:04.849031  389191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:04.884668  389191 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:04.884691  389191 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:04.884712  389191 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1210 06:15:04.884830  389191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-028500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:04.884901  389191 ssh_runner.go:195] Run: crio config
	I1210 06:15:04.946387  389191 cni.go:84] Creating CNI manager for ""
	I1210 06:15:04.946417  389191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:04.946435  389191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:04.946467  389191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-028500 NodeName:embed-certs-028500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:04.946650  389191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-028500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:04.946731  389191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:04.954905  389191 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:04.954966  389191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:04.962457  389191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1210 06:15:04.975335  389191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:04.990854  389191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1210 06:15:05.006024  389191 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:05.009959  389191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:05.020134  389191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:05.100962  389191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:05.121314  389191 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500 for IP: 192.168.85.2
	I1210 06:15:05.121332  389191 certs.go:195] generating shared ca certs ...
	I1210 06:15:05.121347  389191 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:05.121474  389191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:05.121523  389191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:05.121539  389191 certs.go:257] generating profile certs ...
	I1210 06:15:05.121619  389191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/client.key
	I1210 06:15:05.121671  389191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/apiserver.key.486bf2a6
	I1210 06:15:05.121705  389191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/proxy-client.key
	I1210 06:15:05.121809  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:05.121841  389191 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:05.121850  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:05.121875  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:05.121900  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:05.121923  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:05.121963  389191 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:05.122577  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:05.141596  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:05.160914  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:05.181308  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:05.208001  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 06:15:05.227185  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:05.245694  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:05.264158  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/embed-certs-028500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:05.280978  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:05.299369  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:05.320458  389191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:05.338793  389191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:05.351139  389191 ssh_runner.go:195] Run: openssl version
	I1210 06:15:05.357219  389191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.364534  389191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:05.371719  389191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.375174  389191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.375226  389191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:05.410357  389191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:05.417640  389191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.425128  389191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:05.433281  389191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.437140  389191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.437189  389191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:05.473390  389191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:05.480874  389191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.488264  389191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:05.495621  389191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.499112  389191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.499150  389191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:05.535470  389191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:15:05.542508  389191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:05.545871  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:05.584122  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:05.620442  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:05.664709  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:05.714954  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:05.771180  389191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
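Each "-checkend 86400" call above asks whether the certificate expires within the next 24 hours, which is how minikube decides whether a cert still matches the earlier "skipping valid ... regeneration" messages or needs to be reissued. The same check in Go with crypto/x509 (a sketch; minikube shells out to openssl on the node instead):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}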
	I1210 06:15:05.826933  389191 kubeadm.go:401] StartCluster: {Name:embed-certs-028500 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-028500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:05.827043  389191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:05.827162  389191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:05.865208  389191 cri.go:89] found id: "8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd"
	I1210 06:15:05.865233  389191 cri.go:89] found id: "9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b"
	I1210 06:15:05.865248  389191 cri.go:89] found id: "f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803"
	I1210 06:15:05.865255  389191 cri.go:89] found id: "6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e"
	I1210 06:15:05.865259  389191 cri.go:89] found id: ""
	I1210 06:15:05.865302  389191 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:05.882734  389191 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:05Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:05.882826  389191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:05.893263  389191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:05.893280  389191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:05.893336  389191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:05.902726  389191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:05.903775  389191 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-028500" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:05.904283  389191 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-028500" cluster setting kubeconfig missing "embed-certs-028500" context setting]
	I1210 06:15:05.905140  389191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:05.907660  389191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:05.917738  389191 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 06:15:05.917767  389191 kubeadm.go:602] duration metric: took 24.481301ms to restartPrimaryControlPlane
	I1210 06:15:05.917776  389191 kubeadm.go:403] duration metric: took 90.8536ms to StartCluster
	I1210 06:15:05.917793  389191 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:05.917851  389191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:05.920418  389191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:05.920638  389191 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:05.920968  389191 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:05.921014  389191 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:05.921134  389191 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-028500"
	I1210 06:15:05.921154  389191 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-028500"
	W1210 06:15:05.921162  389191 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:05.921185  389191 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:05.921643  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.921987  389191 addons.go:70] Setting dashboard=true in profile "embed-certs-028500"
	I1210 06:15:05.922005  389191 addons.go:239] Setting addon dashboard=true in "embed-certs-028500"
	I1210 06:15:05.922002  389191 addons.go:70] Setting default-storageclass=true in profile "embed-certs-028500"
	W1210 06:15:05.922013  389191 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:05.922037  389191 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-028500"
	I1210 06:15:05.922040  389191 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:05.922592  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.922604  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.923751  389191 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:05.925356  389191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:05.949696  389191 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:05.951394  389191 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:05.951415  389191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:05.951476  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:05.956475  389191 addons.go:239] Setting addon default-storageclass=true in "embed-certs-028500"
	W1210 06:15:05.956721  389191 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:05.956760  389191 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:05.957471  389191 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:05.958852  389191 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:15:05.960610  389191 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1210 06:15:01.523201  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:15:04.013532  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:15:06.018128  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	I1210 06:15:05.271286  388833 kubeadm.go:884] updating cluster {Name:newest-cni-218688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-218688 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:05.271475  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.418692  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.554875  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.687954  388833 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:15:05.688074  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:05.855972  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:06.030326  388833 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:06.187905  388833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:06.229599  388833 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:06.229624  388833 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:15:06.229674  388833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:06.260864  388833 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:06.260884  388833 cache_images.go:86] Images are preloaded, skipping loading
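The "all images are preloaded" conclusion above comes from listing images through crictl and checking that the expected tags are present, so the preload tarball does not need to be extracted again. A sketch of parsing that JSON and checking for a tag; the field names used here ("images", "repoTags") are assumed from crictl's JSON output rather than shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage runs `crictl images --output json` and reports whether tag is present.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/pause:3.10.1")
	fmt.Println(ok, err)
}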
	I1210 06:15:06.260894  388833 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1210 06:15:06.261000  388833 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-218688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-218688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:06.261100  388833 ssh_runner.go:195] Run: crio config
	I1210 06:15:06.316028  388833 cni.go:84] Creating CNI manager for ""
	I1210 06:15:06.316052  388833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:06.316074  388833 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:15:06.316129  388833 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-218688 NodeName:newest-cni-218688 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:06.316337  388833 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-218688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:06.316418  388833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:15:06.326026  388833 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:06.326128  388833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:06.335267  388833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:15:06.348991  388833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:15:06.363704  388833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1210 06:15:06.376224  388833 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:06.379920  388833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:06.390442  388833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:06.472469  388833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:06.503559  388833 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688 for IP: 192.168.76.2
	I1210 06:15:06.503579  388833 certs.go:195] generating shared ca certs ...
	I1210 06:15:06.503599  388833 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.503752  388833 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:06.503814  388833 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:06.503826  388833 certs.go:257] generating profile certs ...
	I1210 06:15:06.503889  388833 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.key
	I1210 06:15:06.503905  388833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.crt with IP's: []
	I1210 06:15:06.636967  388833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.crt ...
	I1210 06:15:06.637060  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.crt: {Name:mk7be03596a45014268417f3b356393146a5f5d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.637284  388833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.key ...
	I1210 06:15:06.637303  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/client.key: {Name:mk4d41261b8fc725ec99540fa8b493975695bbad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.637467  388833 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc
	I1210 06:15:06.637487  388833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
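The "Generating cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]" step above issues the apiserver serving certificate with IP SANs for the service VIP, loopback, and the node IP, signed by the shared minikube CA. A compact crypto/x509 sketch of issuing such a cert from an in-memory CA (illustrative only; key type, validity, and file handling differ in minikube itself):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for minikubeCA.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * 365 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert with the IP SANs from the log line above.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * 365 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}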
	I1210 06:15:05.962837  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:05.962855  389191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:05.962930  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:05.989238  389191 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:05.989307  389191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:05.989396  389191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:05.991151  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:06.009953  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:06.021282  389191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:06.089494  389191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:06.103056  389191 node_ready.go:35] waiting up to 6m0s for node "embed-certs-028500" to be "Ready" ...
	I1210 06:15:06.115500  389191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:06.131670  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:06.131694  389191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:06.140776  389191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:06.147616  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:06.147632  389191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:06.165502  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:06.165523  389191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:06.184209  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:06.184227  389191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:06.201577  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:06.201643  389191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:06.217511  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:06.217544  389191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:06.234339  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:06.234363  389191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:06.248184  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:06.248201  389191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:06.263557  389191 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:06.263581  389191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:06.277717  389191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:06.721124  388833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc ...
	I1210 06:15:06.721199  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc: {Name:mk9639b0fc481e59a3e06f126f056005d1389ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.721403  388833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc ...
	I1210 06:15:06.721428  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc: {Name:mka4c0d0561f1e5e969c77f8c0ebb53cee7ffff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.721564  388833 certs.go:382] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt.52c83bcc -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt
	I1210 06:15:06.721689  388833 certs.go:386] copying /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key.52c83bcc -> /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key
	I1210 06:15:06.721803  388833 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key
	I1210 06:15:06.721830  388833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt with IP's: []
	I1210 06:15:06.763647  388833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt ...
	I1210 06:15:06.763667  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt: {Name:mk3de019be99c3c707ea83fb17418bc0087f5d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.763788  388833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key ...
	I1210 06:15:06.763800  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key: {Name:mk17d65bdaa97fa589b561974987dee32b3e9132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:06.763980  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:06.764060  388833 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:06.764075  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:06.764123  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:06.764166  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:06.764202  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:06.764269  388833 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:06.765174  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:06.788605  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:06.807276  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:06.826209  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:06.846642  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:15:06.866684  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:15:06.886307  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:06.906302  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/newest-cni-218688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:06.928052  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:06.950444  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:06.969613  388833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:06.988837  388833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:07.003179  388833 ssh_runner.go:195] Run: openssl version
	I1210 06:15:07.010508  388833 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.020349  388833 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:07.028855  388833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.033121  388833 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.033171  388833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:07.080981  388833 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:07.089960  388833 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:15:07.099367  388833 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.107736  388833 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:07.114947  388833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.118449  388833 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.118510  388833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:07.166141  388833 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:07.174532  388833 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9253.pem /etc/ssl/certs/51391683.0
	I1210 06:15:07.182247  388833 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.189250  388833 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:07.196368  388833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.199974  388833 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.200024  388833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:07.241262  388833 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:15:07.249112  388833 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92532.pem /etc/ssl/certs/3ec20f2e.0
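
	The cert steps above follow OpenSSL's hashed-directory convention: each CA copied under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and the result is symlinked into /etc/ssl/certs as `<hash>.0`, which is the name verifiers look up. A minimal local Go sketch of that convention (not minikube's implementation; the certificate path is an assumption taken from the log):

    // ca_hash_link.go - sketch only: mirrors the hash-and-symlink steps above, run locally.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem" // assumed path, taken from the log

    	// openssl prints the subject-name hash the OpenSSL cert store uses as a file name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// Verifiers look up CAs as /etc/ssl/certs/<hash>.0, so (re)point that name at the cert.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // same effect as `ln -fs`: drop any stale link first
    	if err := os.Symlink(pem, link); err != nil {
    		log.Fatal(err) // needs root, like the sudo calls in the log
    	}
    	log.Printf("linked %s -> %s", link, pem)
    }
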
	I1210 06:15:07.256617  388833 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:07.260118  388833 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:15:07.260181  388833 kubeadm.go:401] StartCluster: {Name:newest-cni-218688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-218688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:07.260259  388833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:07.260313  388833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:07.287937  388833 cri.go:89] found id: ""
	I1210 06:15:07.287992  388833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:07.295896  388833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:15:07.303702  388833 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:15:07.303767  388833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:15:07.311364  388833 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:15:07.311384  388833 kubeadm.go:158] found existing configuration files:
	
	I1210 06:15:07.311424  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:15:07.318748  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:15:07.318798  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:15:07.325926  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:15:07.333353  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:15:07.333404  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:15:07.340470  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:15:07.349986  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:15:07.350033  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:15:07.358707  388833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:15:07.366333  388833 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:15:07.366379  388833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
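
	The block above is the stale-config check before `kubeadm init`: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or does not reference it is removed so kubeadm can regenerate it. A rough Go equivalent of that check (file list and endpoint copied from the log; not the actual implementation):

    // stale_conf_cleanup.go - sketch only: drop kubeconfigs that don't point at the expected endpoint.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: delete so `kubeadm init` writes a fresh one.
    			_ = os.Remove(f)
    			log.Printf("removed stale %s", f)
    			continue
    		}
    		log.Printf("kept %s", f)
    	}
    }
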
	I1210 06:15:07.373654  388833 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:15:07.419061  388833 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:15:07.419165  388833 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:15:07.524052  388833 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:15:07.524145  388833 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:15:07.524230  388833 kubeadm.go:319] OS: Linux
	I1210 06:15:07.524308  388833 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:15:07.524392  388833 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:15:07.524490  388833 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:15:07.524567  388833 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:15:07.524644  388833 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:15:07.524706  388833 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:15:07.524803  388833 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:15:07.524877  388833 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:15:07.611468  388833 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:15:07.611646  388833 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:15:07.611773  388833 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:15:07.624931  388833 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:15:07.567619  389191 node_ready.go:49] node "embed-certs-028500" is "Ready"
	I1210 06:15:07.567649  389191 node_ready.go:38] duration metric: took 1.464534118s for node "embed-certs-028500" to be "Ready" ...
	I1210 06:15:07.567665  389191 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:07.567719  389191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:08.106174  389191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.990640212s)
	I1210 06:15:08.106272  389191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.965474755s)
	I1210 06:15:08.106409  389191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.828647133s)
	I1210 06:15:08.106432  389191 api_server.go:72] duration metric: took 2.18576772s to wait for apiserver process to appear ...
	I1210 06:15:08.106445  389191 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:08.106465  389191 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:15:08.108237  389191 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-028500 addons enable metrics-server
	
	I1210 06:15:08.112459  389191 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:08.112484  389191 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:08.120156  389191 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:07.627254  388833 out.go:252]   - Generating certificates and keys ...
	I1210 06:15:07.627388  388833 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:15:07.627547  388833 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:15:07.654918  388833 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:15:07.685197  388833 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:15:07.726616  388833 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:15:07.741205  388833 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:15:07.937965  388833 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:15:07.938182  388833 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-218688] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:15:08.083627  388833 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:15:08.083799  388833 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-218688] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:15:08.117630  388833 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:15:08.444067  388833 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:15:08.481509  388833 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:15:08.481664  388833 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:15:08.629793  388833 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:15:08.784672  388833 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:15:08.912623  388833 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:15:08.982623  388833 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:15:09.041301  388833 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:15:09.042069  388833 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:15:09.046169  388833 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1210 06:15:08.514319  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:15:11.012522  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	I1210 06:15:09.047513  388833 out.go:252]   - Booting up control plane ...
	I1210 06:15:09.047629  388833 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:15:09.047771  388833 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:15:09.048682  388833 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:15:09.064568  388833 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:15:09.064719  388833 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:15:09.071416  388833 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:15:09.071705  388833 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:15:09.071762  388833 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:15:09.174032  388833 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:15:09.174190  388833 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:15:09.675873  388833 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.820309ms
	I1210 06:15:09.680105  388833 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:15:09.680202  388833 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1210 06:15:09.680279  388833 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:15:09.680358  388833 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:15:10.685143  388833 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004976063s
	I1210 06:15:08.121020  389191 addons.go:530] duration metric: took 2.20001005s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:08.607308  389191 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:15:08.612418  389191 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:08.612444  389191 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:09.106971  389191 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:15:09.111215  389191 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:15:09.112278  389191 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:09.112300  389191 api_server.go:131] duration metric: took 1.005849343s to wait for apiserver health ...
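
	The healthz exchanges above show the readiness loop: the apiserver's /healthz is requested repeatedly, 500 responses (some poststarthook checks still failing) are treated as "not ready yet", and the wait ends once the endpoint returns 200 with body "ok". A standalone polling sketch under stated assumptions (the URL is copied from the log; TLS verification is skipped here only to keep the example short, whereas minikube trusts the cluster CA):

    // healthz_poll.go - sketch only: poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.85.2:8443/healthz" // endpoint taken from the log above
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Sketch shortcut: skip verification instead of loading the cluster CA bundle.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	deadline := time.Now().Add(2 * time.Minute) // assumed cap; the log shows ~1s in practice
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthy: %s\n", body) // apiserver answers "ok"
    				return
    			}
    			fmt.Printf("not ready yet (HTTP %d)\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for /healthz")
    }
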
	I1210 06:15:09.112309  389191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:09.115678  389191 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:09.115712  389191 system_pods.go:61] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:09.115728  389191 system_pods.go:61] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:09.115744  389191 system_pods.go:61] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:09.115750  389191 system_pods.go:61] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:09.115759  389191 system_pods.go:61] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:09.115774  389191 system_pods.go:61] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:09.115782  389191 system_pods.go:61] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:09.115787  389191 system_pods.go:61] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:09.115795  389191 system_pods.go:74] duration metric: took 3.481249ms to wait for pod list to return data ...
	I1210 06:15:09.115804  389191 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:09.118042  389191 default_sa.go:45] found service account: "default"
	I1210 06:15:09.118060  389191 default_sa.go:55] duration metric: took 2.250472ms for default service account to be created ...
	I1210 06:15:09.118068  389191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:09.120540  389191 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:09.120571  389191 system_pods.go:89] "coredns-66bc5c9577-8xwfc" [7ad22b4a-5d1a-403a-a57e-69745116eb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:09.120582  389191 system_pods.go:89] "etcd-embed-certs-028500" [f56da20c-a457-4f29-98f3-3b29ea6fcbf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:09.120592  389191 system_pods.go:89] "kindnet-6gq2z" [cce0711c-ff56-4335-b244-17f0180eb4d4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:09.120607  389191 system_pods.go:89] "kube-apiserver-embed-certs-028500" [3965275f-b1f9-4996-99e7-c070bdfa875d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:09.120615  389191 system_pods.go:89] "kube-controller-manager-embed-certs-028500" [c513486a-c2d7-4083-acf4-075177467d76] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:09.120622  389191 system_pods.go:89] "kube-proxy-sr7kh" [0b34d810-7015-47ad-98a2-41d80c02a77e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:09.120628  389191 system_pods.go:89] "kube-scheduler-embed-certs-028500" [0a991394-8849-4863-9251-0f883f13c49a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:09.120638  389191 system_pods.go:89] "storage-provisioner" [c6fe10b9-7d0d-4911-afc6-65b935770c41] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:09.120649  389191 system_pods.go:126] duration metric: took 2.574435ms to wait for k8s-apps to be running ...
	I1210 06:15:09.120661  389191 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:09.120706  389191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:09.132328  389191 system_svc.go:56] duration metric: took 11.665031ms WaitForService to wait for kubelet
	I1210 06:15:09.132347  389191 kubeadm.go:587] duration metric: took 3.211683874s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:09.132362  389191 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:09.134657  389191 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:09.134675  389191 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:09.134692  389191 node_conditions.go:105] duration metric: took 2.324204ms to run NodePressure ...
	I1210 06:15:09.134708  389191 start.go:242] waiting for startup goroutines ...
	I1210 06:15:09.134719  389191 start.go:247] waiting for cluster config update ...
	I1210 06:15:09.134732  389191 start.go:256] writing updated cluster config ...
	I1210 06:15:09.134972  389191 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:09.138326  389191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:09.141234  389191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:15:11.146914  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:11.859825  388833 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.179606477s
	I1210 06:15:13.682511  388833 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002397694s
	I1210 06:15:13.702667  388833 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:15:13.713410  388833 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:15:13.721957  388833 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:15:13.722239  388833 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-218688 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:15:13.730608  388833 kubeadm.go:319] [bootstrap-token] Using token: p04ebg.bb0bv44e5xs1djbe
	I1210 06:15:13.731751  388833 out.go:252]   - Configuring RBAC rules ...
	I1210 06:15:13.731895  388833 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:15:13.735338  388833 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:15:13.740170  388833 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:15:13.742439  388833 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:15:13.745397  388833 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:15:13.748676  388833 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:15:14.089766  388833 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:15:14.511942  388833 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:15:15.089616  388833 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:15:15.090833  388833 kubeadm.go:319] 
	I1210 06:15:15.090923  388833 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:15:15.090930  388833 kubeadm.go:319] 
	I1210 06:15:15.091104  388833 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:15:15.091112  388833 kubeadm.go:319] 
	I1210 06:15:15.091141  388833 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:15:15.091215  388833 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:15:15.091278  388833 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:15:15.091283  388833 kubeadm.go:319] 
	I1210 06:15:15.091349  388833 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:15:15.091355  388833 kubeadm.go:319] 
	I1210 06:15:15.091414  388833 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:15:15.091420  388833 kubeadm.go:319] 
	I1210 06:15:15.091525  388833 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:15:15.091636  388833 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:15:15.091727  388833 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:15:15.091732  388833 kubeadm.go:319] 
	I1210 06:15:15.091831  388833 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:15:15.091920  388833 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:15:15.091925  388833 kubeadm.go:319] 
	I1210 06:15:15.092023  388833 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token p04ebg.bb0bv44e5xs1djbe \
	I1210 06:15:15.092172  388833 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc \
	I1210 06:15:15.092218  388833 kubeadm.go:319] 	--control-plane 
	I1210 06:15:15.092233  388833 kubeadm.go:319] 
	I1210 06:15:15.092343  388833 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:15:15.092359  388833 kubeadm.go:319] 
	I1210 06:15:15.092471  388833 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token p04ebg.bb0bv44e5xs1djbe \
	I1210 06:15:15.092592  388833 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec42d2a7c02894c4f889fb8bc31e98283f3b1a3e3609cf9160b0c24109717cc 
	I1210 06:15:15.096387  388833 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:15:15.096609  388833 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
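
	The `--discovery-token-ca-cert-hash sha256:…` value printed in the join commands above is not a file hash; it is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA they discover via the bootstrap token. A small Go sketch of computing such a pin from a CA PEM (the certificate path is an assumption):

    // ca_cert_hash.go - sketch only: compute a kubeadm-style sha256 CA public key pin from a PEM file.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed CA path
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm pins the SHA-256 of the CA's DER-encoded Subject Public Key Info.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }
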
	I1210 06:15:15.096643  388833 cni.go:84] Creating CNI manager for ""
	I1210 06:15:15.096655  388833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:15.098252  388833 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1210 06:15:13.015638  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	W1210 06:15:15.516414  383776 pod_ready.go:104] pod "coredns-7d764666f9-tnm7t" is not "Ready", error: <nil>
	I1210 06:15:15.099329  388833 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:15:15.105118  388833 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1210 06:15:15.105136  388833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 06:15:15.121636  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 06:15:15.436280  388833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:15:15.436466  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:15.436617  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-218688 minikube.k8s.io/updated_at=2025_12_10T06_15_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=newest-cni-218688 minikube.k8s.io/primary=true
	I1210 06:15:15.533764  388833 ops.go:34] apiserver oom_adj: -16
	I1210 06:15:15.533779  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:16.034038  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:16.534754  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1210 06:15:13.148547  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:15.149848  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:17.034065  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:17.533803  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:18.034519  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:18.537239  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:19.034826  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:19.534654  388833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:15:19.598910  388833 kubeadm.go:1114] duration metric: took 4.162497289s to wait for elevateKubeSystemPrivileges
	I1210 06:15:19.598946  388833 kubeadm.go:403] duration metric: took 12.338768334s to StartCluster
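
	The repeated `kubectl get sa default` calls above act as a readiness gate: the `minikube-rbac` ClusterRoleBinding created just before them grants cluster-admin to the kube-system `default` ServiceAccount, so the loop retries roughly twice a second until the token controller has actually created that account. A sketch of the retry loop via the kubectl CLI (flags copied from the log; the overall timeout is an assumption):

    // wait_default_sa.go - sketch only: retry `kubectl get sa default` until the service account exists.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
    	deadline := time.Now().Add(2 * time.Minute) // assumed cap; the log shows ~4s in practice
    	for time.Now().Before(deadline) {
    		if err := exec.Command("kubectl", args...).Run(); err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~half-second spacing of the retries above
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
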
	I1210 06:15:19.598968  388833 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:19.599036  388833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:19.600790  388833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:19.601030  388833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:15:19.601036  388833 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:19.601117  388833 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:19.601219  388833 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-218688"
	I1210 06:15:19.601236  388833 addons.go:70] Setting default-storageclass=true in profile "newest-cni-218688"
	I1210 06:15:19.601268  388833 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-218688"
	I1210 06:15:19.601277  388833 config.go:182] Loaded profile config "newest-cni-218688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:15:19.601242  388833 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-218688"
	I1210 06:15:19.601391  388833 host.go:66] Checking if "newest-cni-218688" exists ...
	I1210 06:15:19.601705  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:19.601874  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:19.602570  388833 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:19.603549  388833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:19.625410  388833 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:19.626023  388833 addons.go:239] Setting addon default-storageclass=true in "newest-cni-218688"
	I1210 06:15:19.626058  388833 host.go:66] Checking if "newest-cni-218688" exists ...
	I1210 06:15:19.626406  388833 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:19.626768  388833 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:19.626789  388833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:19.626851  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:19.654825  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:19.655610  388833 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:19.655631  388833 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:19.655685  388833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:19.677566  388833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:19.694759  388833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:15:19.747806  388833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:19.762777  388833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:19.791188  388833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:19.891126  388833 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
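
	The "host record injected" message above is the result of the sed pipeline a few lines earlier: the CoreDNS Corefile is rewritten in place so that `host.minikube.internal` resolves to the gateway address before queries fall through to the upstream resolver, and query logging is switched on. A Go sketch of that textual edit (the sample Corefile is an assumption; the inserted hosts{} block and `log` line are copied from the sed expressions in the log):

    // coredns_hostrecord.go - sketch only: the edit applied to the Corefile before `kubectl replace`.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	corefile := `.:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa
            forward . /etc/resolv.conf
            cache 30
    }`

    	hostsBlock := "        hosts {\n" +
    		"           192.168.76.1 host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"

    	var out strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
    			out.WriteString(hostsBlock) // answer host.minikube.internal locally, fall through otherwise
    		}
    		if trimmed == "errors" {
    			out.WriteString("        log\n") // enable CoreDNS query logging
    		}
    		out.WriteString(line + "\n")
    	}
    	fmt.Print(out.String())
    }
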
	I1210 06:15:19.892895  388833 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:19.892954  388833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:20.058046  388833 api_server.go:72] duration metric: took 456.976674ms to wait for apiserver process to appear ...
	I1210 06:15:20.058075  388833 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:20.058118  388833 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:20.062177  388833 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:20.062921  388833 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:20.062943  388833 api_server.go:131] duration metric: took 4.842782ms to wait for apiserver health ...
	I1210 06:15:20.062952  388833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:20.063927  388833 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 06:15:20.065128  388833 addons.go:530] duration metric: took 464.017189ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 06:15:20.065326  388833 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:20.065375  388833 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:20.065385  388833 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running
	I1210 06:15:20.065394  388833 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:20.065407  388833 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:20.065414  388833 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running
	I1210 06:15:20.065420  388833 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:20.065427  388833 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:20.065432  388833 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:20.065440  388833 system_pods.go:74] duration metric: took 2.483656ms to wait for pod list to return data ...
	I1210 06:15:20.065446  388833 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:20.067325  388833 default_sa.go:45] found service account: "default"
	I1210 06:15:20.067342  388833 default_sa.go:55] duration metric: took 1.88952ms for default service account to be created ...
	I1210 06:15:20.067354  388833 kubeadm.go:587] duration metric: took 466.290943ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:20.067373  388833 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:20.069480  388833 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:20.069506  388833 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:20.069522  388833 node_conditions.go:105] duration metric: took 2.142641ms to run NodePressure ...
	I1210 06:15:20.069539  388833 start.go:242] waiting for startup goroutines ...
	I1210 06:15:20.396072  388833 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-218688" context rescaled to 1 replicas
	I1210 06:15:20.396127  388833 start.go:247] waiting for cluster config update ...
	I1210 06:15:20.396142  388833 start.go:256] writing updated cluster config ...
	I1210 06:15:20.396436  388833 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:20.449270  388833 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:20.450822  388833 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.147104336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.150316785Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=02460718-5947-4fd1-a555-01796d98cbe2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.150919372Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=07945229-f781-4e4f-a814-c51e88e2416b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.151736896Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.152448449Z" level=info msg="Ran pod sandbox ff2fdaaf5911237e247ee64611a5ad317ade24e63e0b0aeaa1142633e74cd2eb with infra container: kube-system/kindnet-n75st/POD" id=02460718-5947-4fd1-a555-01796d98cbe2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.152536287Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.153832684Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=caad15eb-3728-41b9-9767-ec41f87fc468 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.153915508Z" level=info msg="Ran pod sandbox 999723f466fe250f6308a721b3f17adbfc01ca3b8573a438f14396bf7b7490a3 with infra container: kube-system/kube-proxy-tlj9s/POD" id=07945229-f781-4e4f-a814-c51e88e2416b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.155454306Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5fd497ea-57cf-4695-9fe2-e2e0bfdcfb8d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.155621466Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=4d50fe82-e884-4b3a-b96f-efd2bbb1494c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.156426909Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=0a59ae33-fc0e-4dec-9e43-cd00da00a3c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.158874855Z" level=info msg="Creating container: kube-system/kindnet-n75st/kindnet-cni" id=cc1f61bf-ed33-4bf8-8fae-3f406904a14b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.158958147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.160164337Z" level=info msg="Creating container: kube-system/kube-proxy-tlj9s/kube-proxy" id=51c89bf6-a5ec-41b2-8293-3c2644a919bf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.160277946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.163348118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.163741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.1653875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.165860608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.190444002Z" level=info msg="Created container 9b54e32674ec48823b24a53a8472d5bb491608af324e5c64985a619f9dcc5e3f: kube-system/kindnet-n75st/kindnet-cni" id=cc1f61bf-ed33-4bf8-8fae-3f406904a14b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.191019427Z" level=info msg="Starting container: 9b54e32674ec48823b24a53a8472d5bb491608af324e5c64985a619f9dcc5e3f" id=106cc91a-4146-4e8f-9614-768043c96614 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.192776241Z" level=info msg="Started container" PID=1569 containerID=9b54e32674ec48823b24a53a8472d5bb491608af324e5c64985a619f9dcc5e3f description=kube-system/kindnet-n75st/kindnet-cni id=106cc91a-4146-4e8f-9614-768043c96614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff2fdaaf5911237e247ee64611a5ad317ade24e63e0b0aeaa1142633e74cd2eb
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.194549628Z" level=info msg="Created container 14c442bdae0554908ad3f85ca830434be8e8ba5178c8925b025f6f31467b6b6a: kube-system/kube-proxy-tlj9s/kube-proxy" id=51c89bf6-a5ec-41b2-8293-3c2644a919bf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.195089254Z" level=info msg="Starting container: 14c442bdae0554908ad3f85ca830434be8e8ba5178c8925b025f6f31467b6b6a" id=d5f6a722-83ec-4963-ab41-7571c1b587d2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:20 newest-cni-218688 crio[769]: time="2025-12-10T06:15:20.197908653Z" level=info msg="Started container" PID=1570 containerID=14c442bdae0554908ad3f85ca830434be8e8ba5178c8925b025f6f31467b6b6a description=kube-system/kube-proxy-tlj9s/kube-proxy id=d5f6a722-83ec-4963-ab41-7571c1b587d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=999723f466fe250f6308a721b3f17adbfc01ca3b8573a438f14396bf7b7490a3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	14c442bdae055       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   1 second ago        Running             kube-proxy                0                   999723f466fe2       kube-proxy-tlj9s                            kube-system
	9b54e32674ec4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   ff2fdaaf59112       kindnet-n75st                               kube-system
	73aa8706696dd       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   11 seconds ago      Running             kube-scheduler            0                   da687352614d9       kube-scheduler-newest-cni-218688            kube-system
	d82b050de2d81       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   11 seconds ago      Running             kube-controller-manager   0                   0ec53daac3d2b       kube-controller-manager-newest-cni-218688   kube-system
	38794ab5a91c2       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   11 seconds ago      Running             kube-apiserver            0                   0346d9025f643       kube-apiserver-newest-cni-218688            kube-system
	0c5bf9d1b90b8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   11 seconds ago      Running             etcd                      0                   c8677a57c56ae       etcd-newest-cni-218688                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-218688
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-218688
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=newest-cni-218688
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_15_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:15:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-218688
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:14 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:14 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:14 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 06:15:14 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-218688
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                aad1f9dc-7291-4e51-a2e1-9457e223453b
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-218688                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-n75st                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-218688             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-218688    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-tlj9s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-218688             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-218688 event: Registered Node newest-cni-218688 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [0c5bf9d1b90b8e177e8fb98b8ca863559bbfbaf4bd042d1af3f8153c3c91cee8] <==
	{"level":"info","ts":"2025-12-10T06:15:09.995137Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-10T06:15:10.888201Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-10T06:15:10.888304Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-10T06:15:10.888357Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-10T06:15:10.888373Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:15:10.888402Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:10.889182Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:10.889231Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:15:10.889254Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:10.889265Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:10.890184Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-218688 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:15:10.890195Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:15:10.890402Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:15:10.890425Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:15:10.890222Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:15:10.890210Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:15:10.891216Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:15:10.891302Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:15:10.891344Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-10T06:15:10.891373Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-10T06:15:10.891488Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-10T06:15:10.891691Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:15:10.891707Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:15:10.898126Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-10T06:15:10.898815Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 06:15:21 up 57 min,  0 user,  load average: 6.06, 4.77, 2.99
	Linux newest-cni-218688 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9b54e32674ec48823b24a53a8472d5bb491608af324e5c64985a619f9dcc5e3f] <==
	I1210 06:15:20.412546       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:15:20.412795       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:15:20.412964       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:15:20.412981       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:15:20.413010       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:15:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:15:20.614220       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:15:20.614274       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:15:20.614288       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:15:20.614769       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:15:20.992448       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:15:20.992484       1 metrics.go:72] Registering metrics
	I1210 06:15:20.992588       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [38794ab5a91c2075a947a9bc7d9f4f829aeb128e4b6a3fee6e510c38d29a33fd] <==
	I1210 06:15:11.880696       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:15:11.884238       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1210 06:15:11.884442       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:15:11.884469       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:15:11.884480       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:15:11.889220       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:15:11.893241       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:11.917202       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:15:12.786377       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1210 06:15:12.790055       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:15:12.790074       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:15:13.300441       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:15:13.338500       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:15:13.388201       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:15:13.398889       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 06:15:13.400007       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:15:13.404057       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:15:13.819672       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:15:14.502571       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:15:14.510898       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:15:14.519250       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:15:19.671846       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:15:19.722745       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:15:19.735446       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:15:19.821677       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d82b050de2d8145eb851f23418e80739097b3aa66a6bd1dae87e71822dd110ad] <==
	I1210 06:15:18.624278       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624334       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:15:18.624410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-218688"
	I1210 06:15:18.624428       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624460       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624473       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 06:15:18.624489       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624491       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624512       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624563       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624628       1 range_allocator.go:177] "Sending events to api server"
	I1210 06:15:18.624717       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 06:15:18.624744       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624744       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624747       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:15:18.624892       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624753       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.624856       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.629814       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:15:18.629950       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.633267       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-218688" podCIDRs=["10.42.0.0/24"]
	I1210 06:15:18.723816       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:18.723835       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:15:18.723839       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:15:18.730067       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [14c442bdae0554908ad3f85ca830434be8e8ba5178c8925b025f6f31467b6b6a] <==
	I1210 06:15:20.232259       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:15:20.310963       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:15:20.411217       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:20.411252       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:15:20.411342       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:15:20.431133       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:15:20.431182       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:15:20.436836       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:15:20.437242       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:15:20.437259       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:20.438548       1 config.go:200] "Starting service config controller"
	I1210 06:15:20.438576       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:15:20.438664       1 config.go:309] "Starting node config controller"
	I1210 06:15:20.438687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:15:20.438697       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:15:20.438762       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:15:20.438789       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:15:20.438864       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:15:20.438892       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:15:20.539035       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:15:20.539040       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:15:20.539066       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [73aa8706696ddeed4f172682b14078df8c74c588bb7682d8e0bf15cca73d4023] <==
	E1210 06:15:11.859252       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1210 06:15:11.859247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1210 06:15:11.861032       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1210 06:15:11.861217       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1210 06:15:11.861394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1210 06:15:11.861651       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1210 06:15:11.861783       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1210 06:15:11.861794       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1210 06:15:11.862056       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 06:15:11.862058       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1210 06:15:11.863265       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 06:15:12.706356       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1210 06:15:12.709354       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1210 06:15:12.758049       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1210 06:15:12.782589       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1210 06:15:12.797694       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1210 06:15:12.823794       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1210 06:15:12.825767       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1210 06:15:12.826904       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1210 06:15:12.871881       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 06:15:12.926171       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1210 06:15:13.048333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1210 06:15:13.108915       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1210 06:15:13.121938       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	I1210 06:15:15.352880       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:15:15 newest-cni-218688 kubelet[1291]: I1210 06:15:15.428771    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-218688" podStartSLOduration=1.428753938 podStartE2EDuration="1.428753938s" podCreationTimestamp="2025-12-10 06:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:15.410932386 +0000 UTC m=+1.163986598" watchObservedRunningTime="2025-12-10 06:15:15.428753938 +0000 UTC m=+1.181808135"
	Dec 10 06:15:15 newest-cni-218688 kubelet[1291]: I1210 06:15:15.429216    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-218688" podStartSLOduration=1.429202797 podStartE2EDuration="1.429202797s" podCreationTimestamp="2025-12-10 06:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:15.428877321 +0000 UTC m=+1.181931520" watchObservedRunningTime="2025-12-10 06:15:15.429202797 +0000 UTC m=+1.182256993"
	Dec 10 06:15:15 newest-cni-218688 kubelet[1291]: I1210 06:15:15.441313    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-218688" podStartSLOduration=1.4412909790000001 podStartE2EDuration="1.441290979s" podCreationTimestamp="2025-12-10 06:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:15.439719169 +0000 UTC m=+1.192773366" watchObservedRunningTime="2025-12-10 06:15:15.441290979 +0000 UTC m=+1.194345176"
	Dec 10 06:15:15 newest-cni-218688 kubelet[1291]: I1210 06:15:15.456170    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-218688" podStartSLOduration=1.456150289 podStartE2EDuration="1.456150289s" podCreationTimestamp="2025-12-10 06:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:15.456006666 +0000 UTC m=+1.209060863" watchObservedRunningTime="2025-12-10 06:15:15.456150289 +0000 UTC m=+1.209204486"
	Dec 10 06:15:16 newest-cni-218688 kubelet[1291]: E1210 06:15:16.376272    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-218688" containerName="kube-scheduler"
	Dec 10 06:15:16 newest-cni-218688 kubelet[1291]: E1210 06:15:16.376478    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-218688" containerName="kube-apiserver"
	Dec 10 06:15:16 newest-cni-218688 kubelet[1291]: E1210 06:15:16.376598    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-218688" containerName="kube-controller-manager"
	Dec 10 06:15:16 newest-cni-218688 kubelet[1291]: E1210 06:15:16.376729    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-218688" containerName="etcd"
	Dec 10 06:15:17 newest-cni-218688 kubelet[1291]: E1210 06:15:17.377774    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-218688" containerName="kube-scheduler"
	Dec 10 06:15:17 newest-cni-218688 kubelet[1291]: E1210 06:15:17.377933    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-218688" containerName="etcd"
	Dec 10 06:15:18 newest-cni-218688 kubelet[1291]: E1210 06:15:18.379462    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-218688" containerName="kube-scheduler"
	Dec 10 06:15:18 newest-cni-218688 kubelet[1291]: I1210 06:15:18.700010    1291 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 10 06:15:18 newest-cni-218688 kubelet[1291]: I1210 06:15:18.700655    1291 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: E1210 06:15:19.391334    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-218688" containerName="kube-apiserver"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874159    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-lib-modules\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874210    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghpp\" (UniqueName: \"kubernetes.io/projected/3ff684af-caff-4db8-991a-8ba99fe5f326-kube-api-access-6ghpp\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874242    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-xtables-lock\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874303    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ff684af-caff-4db8-991a-8ba99fe5f326-xtables-lock\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874331    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ff684af-caff-4db8-991a-8ba99fe5f326-lib-modules\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874358    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qff7g\" (UniqueName: \"kubernetes.io/projected/33becf6b-71b4-4682-81bc-c41d280389e3-kube-api-access-qff7g\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874377    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ff684af-caff-4db8-991a-8ba99fe5f326-kube-proxy\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:19 newest-cni-218688 kubelet[1291]: I1210 06:15:19.874400    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-cni-cfg\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:20 newest-cni-218688 kubelet[1291]: I1210 06:15:20.399597    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tlj9s" podStartSLOduration=1.39957915 podStartE2EDuration="1.39957915s" podCreationTimestamp="2025-12-10 06:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:20.399469612 +0000 UTC m=+6.152523809" watchObservedRunningTime="2025-12-10 06:15:20.39957915 +0000 UTC m=+6.152633350"
	Dec 10 06:15:20 newest-cni-218688 kubelet[1291]: I1210 06:15:20.410505    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-n75st" podStartSLOduration=1.410491005 podStartE2EDuration="1.410491005s" podCreationTimestamp="2025-12-10 06:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:15:20.410342316 +0000 UTC m=+6.163396524" watchObservedRunningTime="2025-12-10 06:15:20.410491005 +0000 UTC m=+6.163545202"
	Dec 10 06:15:21 newest-cni-218688 kubelet[1291]: E1210 06:15:21.364660    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-218688" containerName="kube-controller-manager"
	

                                                
                                                
-- /stdout --
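Note: in the node description captured above, newest-cni-218688 still carries the node.kubernetes.io/not-ready taints and reports Ready=False ("no CNI configuration file in /etc/cni/net.d/") simply because kindnet had only just started writing its CNI config; per the pod ages in the same dump, the cluster was only a few seconds old when this post-mortem ran. As a hedged editorial sketch (not part of the test), one way to rule out startup timing when reproducing locally is to wait for node readiness before interacting with the cluster, using the context and node name shown in the logs:

	kubectl --context newest-cni-218688 wait --for=condition=Ready node/newest-cni-218688 --timeout=90s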
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218688 -n newest-cni-218688
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-218688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-44pd7 storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner: exit status 1 (56.670901ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-44pd7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.96s)
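Note: the NotFound errors from the describe step above most likely reflect the missing namespace flag rather than the pods having disappeared; the earlier listing used -A and the reported pods live in kube-system, while `kubectl describe pod` without -n queries the default namespace. A hedged sketch of the equivalent check against the correct namespace, using the names from the log above:

	kubectl --context newest-cni-218688 -n kube-system describe pod coredns-7d764666f9-44pd7 storage-provisioner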

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-218688 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-218688 --alsologtostderr -v=1: exit status 80 (2.114014092s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-218688 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:15:36.829192  399954 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:36.829450  399954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:36.829461  399954 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:36.829468  399954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:36.829749  399954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:36.830047  399954 out.go:368] Setting JSON to false
	I1210 06:15:36.830067  399954 mustload.go:66] Loading cluster: newest-cni-218688
	I1210 06:15:36.830611  399954 config.go:182] Loaded profile config "newest-cni-218688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:15:36.831175  399954 cli_runner.go:164] Run: docker container inspect newest-cni-218688 --format={{.State.Status}}
	I1210 06:15:36.848925  399954 host.go:66] Checking if "newest-cni-218688" exists ...
	I1210 06:15:36.849189  399954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:36.903262  399954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-10 06:15:36.893605189 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:36.903989  399954 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-218688 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:15:36.906700  399954 out.go:179] * Pausing node newest-cni-218688 ... 
	I1210 06:15:36.907696  399954 host.go:66] Checking if "newest-cni-218688" exists ...
	I1210 06:15:36.907965  399954 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:36.908019  399954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:36.925500  399954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:37.020561  399954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:37.032335  399954 pause.go:52] kubelet running: true
	I1210 06:15:37.032411  399954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:15:37.160652  399954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:15:37.160753  399954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:15:37.237132  399954 cri.go:89] found id: "ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9"
	I1210 06:15:37.237163  399954 cri.go:89] found id: "f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c"
	I1210 06:15:37.237170  399954 cri.go:89] found id: "7e4d8e81695d5a307da305b92da855df1d3e4b373020a9cfdf7229f0fef82b24"
	I1210 06:15:37.237199  399954 cri.go:89] found id: "c5cec4543cf4d1934430ad2ce36ea404e30e808cc792d3f8c229bac5e073805b"
	I1210 06:15:37.237205  399954 cri.go:89] found id: "5670b137f9a4dbce31099780cdcda6f57ff0d8aaec66f5248f7e42b1d17ecb78"
	I1210 06:15:37.237211  399954 cri.go:89] found id: "e257321780848d3ffe855909a09763b99823d2b96edae7e378a5f63893b142e0"
	I1210 06:15:37.237214  399954 cri.go:89] found id: ""
	I1210 06:15:37.237256  399954 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:37.249346  399954 retry.go:31] will retry after 157.634465ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:37Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:37.407783  399954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:37.420489  399954 pause.go:52] kubelet running: false
	I1210 06:15:37.420553  399954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:15:37.535114  399954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:15:37.535177  399954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:15:37.597210  399954 cri.go:89] found id: "ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9"
	I1210 06:15:37.597231  399954 cri.go:89] found id: "f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c"
	I1210 06:15:37.597235  399954 cri.go:89] found id: "7e4d8e81695d5a307da305b92da855df1d3e4b373020a9cfdf7229f0fef82b24"
	I1210 06:15:37.597239  399954 cri.go:89] found id: "c5cec4543cf4d1934430ad2ce36ea404e30e808cc792d3f8c229bac5e073805b"
	I1210 06:15:37.597241  399954 cri.go:89] found id: "5670b137f9a4dbce31099780cdcda6f57ff0d8aaec66f5248f7e42b1d17ecb78"
	I1210 06:15:37.597244  399954 cri.go:89] found id: "e257321780848d3ffe855909a09763b99823d2b96edae7e378a5f63893b142e0"
	I1210 06:15:37.597247  399954 cri.go:89] found id: ""
	I1210 06:15:37.597290  399954 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:37.608723  399954 retry.go:31] will retry after 213.056687ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:37Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:37.822113  399954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:37.835639  399954 pause.go:52] kubelet running: false
	I1210 06:15:37.835696  399954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:15:37.957758  399954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:15:37.957825  399954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:15:38.034181  399954 cri.go:89] found id: "ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9"
	I1210 06:15:38.034206  399954 cri.go:89] found id: "f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c"
	I1210 06:15:38.034212  399954 cri.go:89] found id: "7e4d8e81695d5a307da305b92da855df1d3e4b373020a9cfdf7229f0fef82b24"
	I1210 06:15:38.034217  399954 cri.go:89] found id: "c5cec4543cf4d1934430ad2ce36ea404e30e808cc792d3f8c229bac5e073805b"
	I1210 06:15:38.034220  399954 cri.go:89] found id: "5670b137f9a4dbce31099780cdcda6f57ff0d8aaec66f5248f7e42b1d17ecb78"
	I1210 06:15:38.034231  399954 cri.go:89] found id: "e257321780848d3ffe855909a09763b99823d2b96edae7e378a5f63893b142e0"
	I1210 06:15:38.034236  399954 cri.go:89] found id: ""
	I1210 06:15:38.034286  399954 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:38.046020  399954 retry.go:31] will retry after 597.479009ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:38Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:38.644312  399954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:38.656952  399954 pause.go:52] kubelet running: false
	I1210 06:15:38.656996  399954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:15:38.784309  399954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:15:38.784397  399954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:15:38.857310  399954 cri.go:89] found id: "ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9"
	I1210 06:15:38.857335  399954 cri.go:89] found id: "f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c"
	I1210 06:15:38.857342  399954 cri.go:89] found id: "7e4d8e81695d5a307da305b92da855df1d3e4b373020a9cfdf7229f0fef82b24"
	I1210 06:15:38.857347  399954 cri.go:89] found id: "c5cec4543cf4d1934430ad2ce36ea404e30e808cc792d3f8c229bac5e073805b"
	I1210 06:15:38.857351  399954 cri.go:89] found id: "5670b137f9a4dbce31099780cdcda6f57ff0d8aaec66f5248f7e42b1d17ecb78"
	I1210 06:15:38.857355  399954 cri.go:89] found id: "e257321780848d3ffe855909a09763b99823d2b96edae7e378a5f63893b142e0"
	I1210 06:15:38.857358  399954 cri.go:89] found id: ""
	I1210 06:15:38.857397  399954 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:38.874029  399954 out.go:203] 
	W1210 06:15:38.875119  399954 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:15:38.875138  399954 out.go:285] * 
	* 
	W1210 06:15:38.879237  399954 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:15:38.885094  399954 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-218688 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-218688
helpers_test.go:244: (dbg) docker inspect newest-cni-218688:

-- stdout --
	[
	    {
	        "Id": "14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e",
	        "Created": "2025-12-10T06:15:01.877568819Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 397199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:15:25.378124411Z",
	            "FinishedAt": "2025-12-10T06:15:24.542848112Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/hostname",
	        "HostsPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/hosts",
	        "LogPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e-json.log",
	        "Name": "/newest-cni-218688",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-218688:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-218688",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e",
	                "LowerDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-218688",
	                "Source": "/var/lib/docker/volumes/newest-cni-218688/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-218688",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-218688",
	                "name.minikube.sigs.k8s.io": "newest-cni-218688",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "19e56da472b76afdd2403facdc1c8abc12fd134806571d9c7033112e79317f25",
	            "SandboxKey": "/var/run/docker/netns/19e56da472b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-218688": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1445af997c5684ad6e249fa16e019df4a952bdc0bbb87997d65034a6fd60980c",
	                    "EndpointID": "2f0f17b1b224322cd100c40bf7a39a3dedee3385ef3d88995d716941a02498b4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:27:ef:93:7c:49",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-218688",
	                        "14958bae78d3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688: exit status 2 (323.636606ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-218688 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                                                                                                    │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.640099655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.640983746Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9b505d4f-7833-4004-83c9-f3da97942b6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.643218803Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.644405124Z" level=info msg="Ran pod sandbox 229c72af0ec599f33061ddac85fdf73521f4a3c4fd7c8d5211eacf8eb0df4f3e with infra container: kube-system/kindnet-n75st/POD" id=9b505d4f-7833-4004-83c9-f3da97942b6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.644603226Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dc7f0f89-2738-43ce-9f43-af24efdd6860 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.650905128Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.651013013Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=74176821-be7a-465e-85c9-4e124c17854f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.652882456Z" level=info msg="Ran pod sandbox 5c3e154590ba38f6c8e0e03cd8add51aff4709dce45b1d3d0509393684262fe4 with infra container: kube-system/kube-proxy-tlj9s/POD" id=dc7f0f89-2738-43ce-9f43-af24efdd6860 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.653397784Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2a0418fb-4dfb-4091-8aed-3cc2a8b19d03 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.654274791Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ce6ca5bd-d28f-4c55-b02b-1db538dc818f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.654893031Z" level=info msg="Creating container: kube-system/kindnet-n75st/kindnet-cni" id=6989e574-bff4-4241-bbf7-56a9a6760552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.655170424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.655344517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b2ff0a9b-0f86-47cc-a0fe-f3aeff7b4495 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.656332027Z" level=info msg="Creating container: kube-system/kube-proxy-tlj9s/kube-proxy" id=3322fc58-d97a-464d-85be-9ce381c0159e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.656467441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.660193074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.660774048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.663365306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.66394943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.696959407Z" level=info msg="Created container f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c: kube-system/kindnet-n75st/kindnet-cni" id=6989e574-bff4-4241-bbf7-56a9a6760552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.697538694Z" level=info msg="Starting container: f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c" id=a67efdf1-87b2-4819-ae11-9f76346736ce name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.699297464Z" level=info msg="Started container" PID=1057 containerID=f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c description=kube-system/kindnet-n75st/kindnet-cni id=a67efdf1-87b2-4819-ae11-9f76346736ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=229c72af0ec599f33061ddac85fdf73521f4a3c4fd7c8d5211eacf8eb0df4f3e
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.699938814Z" level=info msg="Created container ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9: kube-system/kube-proxy-tlj9s/kube-proxy" id=3322fc58-d97a-464d-85be-9ce381c0159e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.700490747Z" level=info msg="Starting container: ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9" id=a959fd20-8ef5-4e2f-a338-2190f045a119 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.703883078Z" level=info msg="Started container" PID=1058 containerID=ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9 description=kube-system/kube-proxy-tlj9s/kube-proxy id=a959fd20-8ef5-4e2f-a338-2190f045a119 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c3e154590ba38f6c8e0e03cd8add51aff4709dce45b1d3d0509393684262fe4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ca0dbe2135381       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   5 seconds ago       Running             kube-proxy                1                   5c3e154590ba3       kube-proxy-tlj9s                            kube-system
	f953c9c411dc6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   229c72af0ec59       kindnet-n75st                               kube-system
	7e4d8e81695d5       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   7 seconds ago       Running             kube-scheduler            1                   c821e1403cb5e       kube-scheduler-newest-cni-218688            kube-system
	c5cec4543cf4d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   7 seconds ago       Running             kube-controller-manager   1                   f7d1217d616b9       kube-controller-manager-newest-cni-218688   kube-system
	5670b137f9a4d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   397a991dc17ce       etcd-newest-cni-218688                      kube-system
	e257321780848       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   7 seconds ago       Running             kube-apiserver            1                   a7208a3affb7a       kube-apiserver-newest-cni-218688            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-218688
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-218688
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=newest-cni-218688
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_15_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:15:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-218688
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-218688
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                aad1f9dc-7291-4e51-a2e1-9457e223453b
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-218688                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-n75st                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-218688             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-218688    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-tlj9s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-218688             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21s   node-controller  Node newest-cni-218688 event: Registered Node newest-cni-218688 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-218688 event: Registered Node newest-cni-218688 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [5670b137f9a4dbce31099780cdcda6f57ff0d8aaec66f5248f7e42b1d17ecb78] <==
	{"level":"info","ts":"2025-12-10T06:15:32.904232Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-10T06:15:32.904587Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:15:32.904654Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:15:32.904946Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-10T06:15:32.905108Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-10T06:15:32.905237Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:15:32.905295Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:15:32.995341Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:32.995401Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:32.995464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:32.995483Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:15:32.995500Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.996068Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.996715Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:15:32.996805Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.996867Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.997933Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:15:32.998067Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:15:32.997854Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-218688 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:15:32.998427Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:15:32.998452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:15:32.999279Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:15:33.000069Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:15:33.006764Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-10T06:15:33.009239Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 06:15:40 up 58 min,  0 user,  load average: 4.56, 4.51, 2.95
	Linux newest-cni-218688 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c] <==
	I1210 06:15:34.948214       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:15:34.948536       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:15:34.948703       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:15:34.948722       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:15:34.948752       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:15:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:15:35.151700       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:15:35.151836       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:15:35.151913       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:15:35.152581       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:15:35.453290       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:15:35.453320       1 metrics.go:72] Registering metrics
	I1210 06:15:35.453363       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [e257321780848d3ffe855909a09763b99823d2b96edae7e378a5f63893b142e0] <==
	I1210 06:15:34.141918       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:15:34.141927       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:15:34.142126       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.142164       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:15:34.142654       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:15:34.143021       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:15:34.143043       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.150967       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:15:34.153482       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:15:34.185966       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:15:34.192716       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.192736       1 policy_source.go:248] refreshing policies
	I1210 06:15:34.284995       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:15:34.361788       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:15:34.441960       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:15:34.488729       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:15:34.509802       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:15:34.521754       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:15:34.560368       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.117.103"}
	I1210 06:15:34.571617       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.206.102"}
	I1210 06:15:35.044718       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:15:37.636255       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:15:37.636301       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:15:37.735805       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:15:37.886334       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c5cec4543cf4d1934430ad2ce36ea404e30e808cc792d3f8c229bac5e073805b] <==
	I1210 06:15:37.295060       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.294794       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295177       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295186       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295203       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295219       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:15:37.295267       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295332       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295419       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295872       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299393       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299443       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299453       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299531       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299568       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299924       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.300131       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.302207       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.302463       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.311949       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-218688"
	I1210 06:15:37.312007       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 06:15:37.395033       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.395101       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.395115       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:15:37.395121       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9] <==
	I1210 06:15:34.735685       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:15:34.793330       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:15:34.894482       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.894519       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:15:34.894620       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:15:34.912493       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:15:34.912570       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:15:34.917629       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:15:34.917956       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:15:34.917972       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:34.919010       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:15:34.919034       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:15:34.919406       1 config.go:200] "Starting service config controller"
	I1210 06:15:34.919131       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:15:34.919617       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:15:34.919621       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:15:34.920220       1 config.go:309] "Starting node config controller"
	I1210 06:15:34.920237       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:15:34.920245       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:15:35.019982       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:15:35.020004       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:15:35.020029       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7e4d8e81695d5a307da305b92da855df1d3e4b373020a9cfdf7229f0fef82b24] <==
	I1210 06:15:33.125861       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:15:34.069318       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:15:34.069368       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:15:34.069381       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:15:34.069391       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:15:34.098099       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:15:34.098194       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:34.100305       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:34.100356       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:15:34.100456       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:15:34.100649       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:15:34.201192       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.327269     679 apiserver.go:52] "Watching apiserver"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.334165     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359074     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-lib-modules\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359279     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-xtables-lock\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359333     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ff684af-caff-4db8-991a-8ba99fe5f326-xtables-lock\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359355     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ff684af-caff-4db8-991a-8ba99fe5f326-lib-modules\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359455     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-cni-cfg\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.368630     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.368881     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.369057     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-218688" containerName="kube-controller-manager"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.369341     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.383219     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-218688\" already exists" pod="kube-system/kube-scheduler-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.383292     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-218688" containerName="kube-scheduler"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384445     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-218688\" already exists" pod="kube-system/kube-apiserver-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384544     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-218688" containerName="kube-apiserver"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384672     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-218688\" already exists" pod="kube-system/etcd-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384740     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-218688" containerName="etcd"
	Dec 10 06:15:35 newest-cni-218688 kubelet[679]: E1210 06:15:35.374327     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-218688" containerName="kube-scheduler"
	Dec 10 06:15:35 newest-cni-218688 kubelet[679]: E1210 06:15:35.374428     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-218688" containerName="kube-apiserver"
	Dec 10 06:15:35 newest-cni-218688 kubelet[679]: E1210 06:15:35.374545     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-218688" containerName="etcd"
	Dec 10 06:15:36 newest-cni-218688 kubelet[679]: E1210 06:15:36.900658     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-218688" containerName="kube-controller-manager"
	Dec 10 06:15:37 newest-cni-218688 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:15:37 newest-cni-218688 kubelet[679]: I1210 06:15:37.140183     679 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 10 06:15:37 newest-cni-218688 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:15:37 newest-cni-218688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218688 -n newest-cni-218688
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218688 -n newest-cni-218688: exit status 2 (329.30516ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-218688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx: exit status 1 (62.416485ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-44pd7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6xnrs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-7lvwx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx: exit status 1
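The NotFound errors above mean only that the four pods named a moment earlier no longer exist by the time describe runs; the exit status 1 comes from kubectl itself, not from an unreachable API server. The same triage the harness performs can be repeated by hand against this profile; a minimal sketch, assuming the newest-cni-218688 context and container still exist, kubectl and docker are on PATH, and <pod-name> is a placeholder for whatever the first command prints:

	# pods not in the Running phase (same field selector the harness uses)
	kubectl --context newest-cni-218688 get po -A --field-selector=status.phase!=Running
	# events and container state for one of them
	kubectl --context newest-cni-218688 -n kube-system describe pod <pod-name>
	# whether the node container itself is running or paused (these fields appear in the docker inspect dump below)
	docker container inspect newest-cni-218688 --format='{{.State.Status}} paused={{.State.Paused}}'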
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-218688
helpers_test.go:244: (dbg) docker inspect newest-cni-218688:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e",
	        "Created": "2025-12-10T06:15:01.877568819Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 397199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:15:25.378124411Z",
	            "FinishedAt": "2025-12-10T06:15:24.542848112Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/hostname",
	        "HostsPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/hosts",
	        "LogPath": "/var/lib/docker/containers/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e/14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e-json.log",
	        "Name": "/newest-cni-218688",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-218688:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-218688",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14958bae78d39469fc1fc1f95bca29ccbf6b7db0ff36525ccb2480de8418941e",
	                "LowerDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31b2476d0f6ff5b94417c3ab5d997fc6f8760ed556372206950721b79dd71892/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-218688",
	                "Source": "/var/lib/docker/volumes/newest-cni-218688/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-218688",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-218688",
	                "name.minikube.sigs.k8s.io": "newest-cni-218688",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "19e56da472b76afdd2403facdc1c8abc12fd134806571d9c7033112e79317f25",
	            "SandboxKey": "/var/run/docker/netns/19e56da472b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-218688": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1445af997c5684ad6e249fa16e019df4a952bdc0bbb87997d65034a6fd60980c",
	                    "EndpointID": "2f0f17b1b224322cd100c40bf7a39a3dedee3385ef3d88995d716941a02498b4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:27:ef:93:7c:49",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-218688",
	                        "14958bae78d3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688: exit status 2 (319.943227ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
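Both status probes in this post-mortem use minikube's Go-template output ({{.APIServer}} earlier, {{.Host}} here), so the "Running" printed above is the value of a single field rather than an overall verdict. A hedged sketch of an equivalent manual check, using the same binary and profile as the run above (the Kubelet and Kubeconfig fields are assumptions based on minikube's documented status fields, not taken from this log):

	out/minikube-linux-amd64 status -p newest-cni-218688 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'

The non-zero exit status here appears to encode component state rather than a failed command, which is why the harness annotates it "(may be ok)".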
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-218688 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-218688 logs -n 25: (1.040179674s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p disable-driver-mounts-569732                                                                                                                                                                                                                    │ disable-driver-mounts-569732 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-468539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
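	[editor's note] The long run of /healthz probes above is the apiserver readiness wait: the runner keeps GETting https://192.168.76.2:8443/healthz, treats 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) as "not ready yet", and stops once the endpoint returns 200 "ok". The fragment below is a minimal editor's sketch of that polling pattern, not minikube's actual api_server.go code; the URL and timeout are illustrative values taken from this log.

	// healthzwait.go - minimal sketch of polling an apiserver /healthz endpoint
	// until it returns 200, as the log above does. Illustrative only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe runs anonymously against the node's serving cert, so
			// certificate verification is skipped here (sketch assumption).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "ok" - apiserver is healthy
				}
				// 403 (anonymous) and 500 (post-start hooks still failing)
				// both mean "retry", exactly as in the log above.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}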
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.174412  398989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:39.179699  398989 fix.go:56] duration metric: took 4.748715142s for fixHost
	I1210 06:15:39.179726  398989 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 4.748759367s
	I1210 06:15:39.179795  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:39.198657  398989 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:39.198712  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.198747  398989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:39.198819  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.220204  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.220241  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.317475  398989 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:39.391108  398989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:39.430876  398989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:39.435737  398989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:39.435812  398989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:39.444134  398989 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:39.444154  398989 start.go:496] detecting cgroup driver to use...
	I1210 06:15:39.444185  398989 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:39.444220  398989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:39.458418  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:39.470158  398989 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:39.470210  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:39.485432  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:39.497705  398989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:39.587848  398989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:39.679325  398989 docker.go:234] disabling docker service ...
	I1210 06:15:39.679390  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:39.695744  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:39.710121  398989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:39.803290  398989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:39.889666  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:39.901841  398989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:39.916001  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.053859  398989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:40.053907  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.064032  398989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:40.064119  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.074052  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.082799  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.091069  398989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:40.099125  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.108348  398989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.116442  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.124562  398989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:40.131659  398989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:40.139831  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:40.235238  398989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:40.390045  398989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:40.390127  398989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:40.394019  398989 start.go:564] Will wait 60s for crictl version
	I1210 06:15:40.394073  398989 ssh_runner.go:195] Run: which crictl
	I1210 06:15:40.397521  398989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:40.422130  398989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:40.422196  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.449888  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.482873  398989 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
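	[editor's note] Before handing off to kubeadm, the runner above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause_image, systemd cgroup_manager, conmon_cgroup, unprivileged ports) and restarts crio. The fragment below is an editor's condensed sketch of the same pause-image/cgroup-driver rewrite done in-process; the file path and field names come from the log, everything else is illustrative and this is not how minikube itself performs the edit (it runs sed over SSH).

	// criocfg.go - sketch of the CRI-O config edits the log performs with sed.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Replace the pause_image and cgroup_manager assignments, mirroring
		// the two sed commands shown in the log above.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		// Values mirror the log: registry.k8s.io/pause:3.10.1 and the systemd driver.
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10.1", "systemd"); err != nil {
			fmt.Println("rewrite failed:", err)
		}
		// A real flow would then run: systemctl daemon-reload && systemctl restart crio.
	}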
	
	
	==> CRI-O <==
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.640099655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.640983746Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9b505d4f-7833-4004-83c9-f3da97942b6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.643218803Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.644405124Z" level=info msg="Ran pod sandbox 229c72af0ec599f33061ddac85fdf73521f4a3c4fd7c8d5211eacf8eb0df4f3e with infra container: kube-system/kindnet-n75st/POD" id=9b505d4f-7833-4004-83c9-f3da97942b6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.644603226Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dc7f0f89-2738-43ce-9f43-af24efdd6860 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.650905128Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.651013013Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=74176821-be7a-465e-85c9-4e124c17854f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.652882456Z" level=info msg="Ran pod sandbox 5c3e154590ba38f6c8e0e03cd8add51aff4709dce45b1d3d0509393684262fe4 with infra container: kube-system/kube-proxy-tlj9s/POD" id=dc7f0f89-2738-43ce-9f43-af24efdd6860 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.653397784Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2a0418fb-4dfb-4091-8aed-3cc2a8b19d03 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.654274791Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ce6ca5bd-d28f-4c55-b02b-1db538dc818f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.654893031Z" level=info msg="Creating container: kube-system/kindnet-n75st/kindnet-cni" id=6989e574-bff4-4241-bbf7-56a9a6760552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.655170424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.655344517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b2ff0a9b-0f86-47cc-a0fe-f3aeff7b4495 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.656332027Z" level=info msg="Creating container: kube-system/kube-proxy-tlj9s/kube-proxy" id=3322fc58-d97a-464d-85be-9ce381c0159e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.656467441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.660193074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.660774048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.663365306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.66394943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.696959407Z" level=info msg="Created container f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c: kube-system/kindnet-n75st/kindnet-cni" id=6989e574-bff4-4241-bbf7-56a9a6760552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.697538694Z" level=info msg="Starting container: f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c" id=a67efdf1-87b2-4819-ae11-9f76346736ce name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.699297464Z" level=info msg="Started container" PID=1057 containerID=f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c description=kube-system/kindnet-n75st/kindnet-cni id=a67efdf1-87b2-4819-ae11-9f76346736ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=229c72af0ec599f33061ddac85fdf73521f4a3c4fd7c8d5211eacf8eb0df4f3e
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.699938814Z" level=info msg="Created container ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9: kube-system/kube-proxy-tlj9s/kube-proxy" id=3322fc58-d97a-464d-85be-9ce381c0159e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.700490747Z" level=info msg="Starting container: ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9" id=a959fd20-8ef5-4e2f-a338-2190f045a119 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:34 newest-cni-218688 crio[524]: time="2025-12-10T06:15:34.703883078Z" level=info msg="Started container" PID=1058 containerID=ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9 description=kube-system/kube-proxy-tlj9s/kube-proxy id=a959fd20-8ef5-4e2f-a338-2190f045a119 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c3e154590ba38f6c8e0e03cd8add51aff4709dce45b1d3d0509393684262fe4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ca0dbe2135381       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   6 seconds ago       Running             kube-proxy                1                   5c3e154590ba3       kube-proxy-tlj9s                            kube-system
	f953c9c411dc6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   229c72af0ec59       kindnet-n75st                               kube-system
	7e4d8e81695d5       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   8 seconds ago       Running             kube-scheduler            1                   c821e1403cb5e       kube-scheduler-newest-cni-218688            kube-system
	c5cec4543cf4d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   8 seconds ago       Running             kube-controller-manager   1                   f7d1217d616b9       kube-controller-manager-newest-cni-218688   kube-system
	5670b137f9a4d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   397a991dc17ce       etcd-newest-cni-218688                      kube-system
	e257321780848       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   8 seconds ago       Running             kube-apiserver            1                   a7208a3affb7a       kube-apiserver-newest-cni-218688            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-218688
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-218688
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=newest-cni-218688
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_15_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:15:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-218688
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 06:15:34 +0000   Wed, 10 Dec 2025 06:15:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-218688
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                aad1f9dc-7291-4e51-a2e1-9457e223453b
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-218688                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-n75st                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-218688             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-218688    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-tlj9s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-218688             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  23s   node-controller  Node newest-cni-218688 event: Registered Node newest-cni-218688 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-218688 event: Registered Node newest-cni-218688 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [5670b137f9a4dbce31099780cdcda6f57ff0d8aaec66f5248f7e42b1d17ecb78] <==
	{"level":"info","ts":"2025-12-10T06:15:32.904232Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-10T06:15:32.904587Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:15:32.904654Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T06:15:32.904946Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-10T06:15:32.905108Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-10T06:15:32.905237Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:15:32.905295Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:15:32.995341Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:32.995401Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:32.995464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-10T06:15:32.995483Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:15:32.995500Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.996068Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.996715Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:15:32.996805Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.996867Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-10T06:15:32.997933Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:15:32.998067Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:15:32.997854Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-218688 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:15:32.998427Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:15:32.998452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:15:32.999279Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:15:33.000069Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:15:33.006764Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-10T06:15:33.009239Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 06:15:41 up 58 min,  0 user,  load average: 4.56, 4.51, 2.95
	Linux newest-cni-218688 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f953c9c411dc66a0e299a0159b88b2797ec26bef99175c950d918713f0b5913c] <==
	I1210 06:15:34.948214       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:15:34.948536       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:15:34.948703       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:15:34.948722       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:15:34.948752       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:15:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:15:35.151700       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:15:35.151836       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:15:35.151913       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:15:35.152581       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:15:35.453290       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:15:35.453320       1 metrics.go:72] Registering metrics
	I1210 06:15:35.453363       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [e257321780848d3ffe855909a09763b99823d2b96edae7e378a5f63893b142e0] <==
	I1210 06:15:34.141918       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:15:34.141927       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:15:34.142126       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.142164       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:15:34.142654       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:15:34.143021       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:15:34.143043       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.150967       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:15:34.153482       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:15:34.185966       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:15:34.192716       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.192736       1 policy_source.go:248] refreshing policies
	I1210 06:15:34.284995       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:15:34.361788       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:15:34.441960       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:15:34.488729       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:15:34.509802       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:15:34.521754       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:15:34.560368       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.117.103"}
	I1210 06:15:34.571617       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.206.102"}
	I1210 06:15:35.044718       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:15:37.636255       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:15:37.636301       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:15:37.735805       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:15:37.886334       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c5cec4543cf4d1934430ad2ce36ea404e30e808cc792d3f8c229bac5e073805b] <==
	I1210 06:15:37.295060       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.294794       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295177       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295186       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295203       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295219       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:15:37.295267       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295332       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295419       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.295872       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299393       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299443       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299453       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299531       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299568       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.299924       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.300131       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.302207       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.302463       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.311949       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-218688"
	I1210 06:15:37.312007       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 06:15:37.395033       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.395101       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:37.395115       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:15:37.395121       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ca0dbe21353818bffb1a564f8fa31f330d4f4bf2e79e1f937f38d09a263b6de9] <==
	I1210 06:15:34.735685       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:15:34.793330       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:15:34.894482       1 shared_informer.go:377] "Caches are synced"
	I1210 06:15:34.894519       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:15:34.894620       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:15:34.912493       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:15:34.912570       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:15:34.917629       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:15:34.917956       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:15:34.917972       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:34.919010       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:15:34.919034       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:15:34.919406       1 config.go:200] "Starting service config controller"
	I1210 06:15:34.919131       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:15:34.919617       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:15:34.919621       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:15:34.920220       1 config.go:309] "Starting node config controller"
	I1210 06:15:34.920237       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:15:34.920245       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:15:35.019982       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:15:35.020004       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:15:35.020029       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7e4d8e81695d5a307da305b92da855df1d3e4b373020a9cfdf7229f0fef82b24] <==
	I1210 06:15:33.125861       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:15:34.069318       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:15:34.069368       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:15:34.069381       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:15:34.069391       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:15:34.098099       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:15:34.098194       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:34.100305       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:34.100356       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:15:34.100456       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:15:34.100649       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:15:34.201192       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.327269     679 apiserver.go:52] "Watching apiserver"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.334165     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359074     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-lib-modules\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359279     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-xtables-lock\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359333     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ff684af-caff-4db8-991a-8ba99fe5f326-xtables-lock\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359355     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ff684af-caff-4db8-991a-8ba99fe5f326-lib-modules\") pod \"kube-proxy-tlj9s\" (UID: \"3ff684af-caff-4db8-991a-8ba99fe5f326\") " pod="kube-system/kube-proxy-tlj9s"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.359455     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/33becf6b-71b4-4682-81bc-c41d280389e3-cni-cfg\") pod \"kindnet-n75st\" (UID: \"33becf6b-71b4-4682-81bc-c41d280389e3\") " pod="kube-system/kindnet-n75st"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.368630     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.368881     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.369057     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-218688" containerName="kube-controller-manager"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: I1210 06:15:34.369341     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.383219     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-218688\" already exists" pod="kube-system/kube-scheduler-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.383292     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-218688" containerName="kube-scheduler"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384445     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-218688\" already exists" pod="kube-system/kube-apiserver-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384544     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-218688" containerName="kube-apiserver"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384672     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-218688\" already exists" pod="kube-system/etcd-newest-cni-218688"
	Dec 10 06:15:34 newest-cni-218688 kubelet[679]: E1210 06:15:34.384740     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-218688" containerName="etcd"
	Dec 10 06:15:35 newest-cni-218688 kubelet[679]: E1210 06:15:35.374327     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-218688" containerName="kube-scheduler"
	Dec 10 06:15:35 newest-cni-218688 kubelet[679]: E1210 06:15:35.374428     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-218688" containerName="kube-apiserver"
	Dec 10 06:15:35 newest-cni-218688 kubelet[679]: E1210 06:15:35.374545     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-218688" containerName="etcd"
	Dec 10 06:15:36 newest-cni-218688 kubelet[679]: E1210 06:15:36.900658     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-218688" containerName="kube-controller-manager"
	Dec 10 06:15:37 newest-cni-218688 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:15:37 newest-cni-218688 kubelet[679]: I1210 06:15:37.140183     679 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 10 06:15:37 newest-cni-218688 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:15:37 newest-cni-218688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218688 -n newest-cni-218688
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218688 -n newest-cni-218688: exit status 2 (468.743186ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-218688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx: exit status 1 (70.881147ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-44pd7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6xnrs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-7lvwx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-218688 describe pod coredns-7d764666f9-44pd7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-6xnrs kubernetes-dashboard-b84665fb8-7lvwx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.93s)
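The describe step above returned NotFound for every pod that the field-selector listing had just reported, which is consistent with those pods having been recreated under new names between the two commands rather than with a second, separate failure. A minimal re-check sketch, assuming the newest-cni-218688 context is still reachable (the trailing pod name is a hypothetical placeholder):
	# re-list non-running pods so the names are current at describe time
	kubectl --context newest-cni-218688 get pods -A --field-selector=status.phase!=Running -o wide
	# then describe one of the freshly listed names, for example:
	kubectl --context newest-cni-218688 -n kube-system describe pod <current-coredns-pod-name>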

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-468539 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-468539 --alsologtostderr -v=1: exit status 80 (2.175554012s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-468539 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:15:42.216382  402457 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:42.216808  402457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:42.216847  402457 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:42.216868  402457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:42.217912  402457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:42.218704  402457 out.go:368] Setting JSON to false
	I1210 06:15:42.219152  402457 mustload.go:66] Loading cluster: no-preload-468539
	I1210 06:15:42.219869  402457 config.go:182] Loaded profile config "no-preload-468539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:15:42.220557  402457 cli_runner.go:164] Run: docker container inspect no-preload-468539 --format={{.State.Status}}
	I1210 06:15:42.248260  402457 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:15:42.248610  402457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:42.358576  402457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-10 06:15:42.343121484 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:42.360118  402457 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-468539 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:15:42.362261  402457 out.go:179] * Pausing node no-preload-468539 ... 
	I1210 06:15:42.364198  402457 host.go:66] Checking if "no-preload-468539" exists ...
	I1210 06:15:42.364520  402457 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:42.364573  402457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-468539
	I1210 06:15:42.389908  402457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/no-preload-468539/id_rsa Username:docker}
	I1210 06:15:42.499723  402457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:42.528326  402457 pause.go:52] kubelet running: true
	I1210 06:15:42.528391  402457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:15:42.723517  402457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:15:42.723609  402457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:15:42.812197  402457 cri.go:89] found id: "25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d"
	I1210 06:15:42.812222  402457 cri.go:89] found id: "780f17677488d4e3d342bfbbfc968a945a742b5fb3447af9c987bb95762b3366"
	I1210 06:15:42.812228  402457 cri.go:89] found id: "32ee946f5e3f4d42656bffcedb68d2f90dfd63a4a50ee17ca9f5e5f823cabf61"
	I1210 06:15:42.812233  402457 cri.go:89] found id: "727c24c7f1527589ab0502be864047d735f1305e9896534e8e8fbe0d09f2be60"
	I1210 06:15:42.812237  402457 cri.go:89] found id: "befbdb3ebe2058e17934ebd0991371f1d2a7eff5a44d52842577b04f68e5366c"
	I1210 06:15:42.812243  402457 cri.go:89] found id: "986b3c2f0cda833eb6ebd4b6f5458a0e267bb8b83d3a119c68be6281e7585474"
	I1210 06:15:42.812247  402457 cri.go:89] found id: "87175e8498ad3223a893f9948444ea564e4f493dc0ce2a68eed9c2e36f356f00"
	I1210 06:15:42.812251  402457 cri.go:89] found id: "ec6692c835d1d4b482f3d9e22fd61d623beb739ec5760b5e0b356cba3798f5ef"
	I1210 06:15:42.812255  402457 cri.go:89] found id: "c134cc07c343ee0eec86fdc21ea9f07ab5dc05344377ced872b852a9c514a84c"
	I1210 06:15:42.812263  402457 cri.go:89] found id: "c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68"
	I1210 06:15:42.812268  402457 cri.go:89] found id: "12c94613744db32c3f84814a4c7492788abc23f15b7ed91e6947a03dfde75487"
	I1210 06:15:42.812272  402457 cri.go:89] found id: ""
	I1210 06:15:42.812317  402457 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:42.828994  402457 retry.go:31] will retry after 324.81731ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:43.154662  402457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:43.171155  402457 pause.go:52] kubelet running: false
	I1210 06:15:43.171222  402457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:15:43.387118  402457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:15:43.387215  402457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:15:43.466384  402457 cri.go:89] found id: "25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d"
	I1210 06:15:43.466413  402457 cri.go:89] found id: "780f17677488d4e3d342bfbbfc968a945a742b5fb3447af9c987bb95762b3366"
	I1210 06:15:43.466427  402457 cri.go:89] found id: "32ee946f5e3f4d42656bffcedb68d2f90dfd63a4a50ee17ca9f5e5f823cabf61"
	I1210 06:15:43.466433  402457 cri.go:89] found id: "727c24c7f1527589ab0502be864047d735f1305e9896534e8e8fbe0d09f2be60"
	I1210 06:15:43.466437  402457 cri.go:89] found id: "befbdb3ebe2058e17934ebd0991371f1d2a7eff5a44d52842577b04f68e5366c"
	I1210 06:15:43.466443  402457 cri.go:89] found id: "986b3c2f0cda833eb6ebd4b6f5458a0e267bb8b83d3a119c68be6281e7585474"
	I1210 06:15:43.466473  402457 cri.go:89] found id: "87175e8498ad3223a893f9948444ea564e4f493dc0ce2a68eed9c2e36f356f00"
	I1210 06:15:43.466481  402457 cri.go:89] found id: "ec6692c835d1d4b482f3d9e22fd61d623beb739ec5760b5e0b356cba3798f5ef"
	I1210 06:15:43.466487  402457 cri.go:89] found id: "c134cc07c343ee0eec86fdc21ea9f07ab5dc05344377ced872b852a9c514a84c"
	I1210 06:15:43.466509  402457 cri.go:89] found id: "c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68"
	I1210 06:15:43.466518  402457 cri.go:89] found id: "12c94613744db32c3f84814a4c7492788abc23f15b7ed91e6947a03dfde75487"
	I1210 06:15:43.466522  402457 cri.go:89] found id: ""
	I1210 06:15:43.466581  402457 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:43.479139  402457 retry.go:31] will retry after 492.835926ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:43Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:43.972894  402457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:43.987882  402457 pause.go:52] kubelet running: false
	I1210 06:15:43.987940  402457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:15:44.180383  402457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:15:44.180482  402457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:15:44.261001  402457 cri.go:89] found id: "25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d"
	I1210 06:15:44.261026  402457 cri.go:89] found id: "780f17677488d4e3d342bfbbfc968a945a742b5fb3447af9c987bb95762b3366"
	I1210 06:15:44.261032  402457 cri.go:89] found id: "32ee946f5e3f4d42656bffcedb68d2f90dfd63a4a50ee17ca9f5e5f823cabf61"
	I1210 06:15:44.261047  402457 cri.go:89] found id: "727c24c7f1527589ab0502be864047d735f1305e9896534e8e8fbe0d09f2be60"
	I1210 06:15:44.261053  402457 cri.go:89] found id: "befbdb3ebe2058e17934ebd0991371f1d2a7eff5a44d52842577b04f68e5366c"
	I1210 06:15:44.261058  402457 cri.go:89] found id: "986b3c2f0cda833eb6ebd4b6f5458a0e267bb8b83d3a119c68be6281e7585474"
	I1210 06:15:44.261063  402457 cri.go:89] found id: "87175e8498ad3223a893f9948444ea564e4f493dc0ce2a68eed9c2e36f356f00"
	I1210 06:15:44.261067  402457 cri.go:89] found id: "ec6692c835d1d4b482f3d9e22fd61d623beb739ec5760b5e0b356cba3798f5ef"
	I1210 06:15:44.261072  402457 cri.go:89] found id: "c134cc07c343ee0eec86fdc21ea9f07ab5dc05344377ced872b852a9c514a84c"
	I1210 06:15:44.261109  402457 cri.go:89] found id: "c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68"
	I1210 06:15:44.261115  402457 cri.go:89] found id: "12c94613744db32c3f84814a4c7492788abc23f15b7ed91e6947a03dfde75487"
	I1210 06:15:44.261120  402457 cri.go:89] found id: ""
	I1210 06:15:44.261165  402457 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:44.280893  402457 out.go:203] 
	W1210 06:15:44.281877  402457 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:15:44.281896  402457 out.go:285] * 
	* 
	W1210 06:15:44.287901  402457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:15:44.289243  402457 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-468539 --alsologtostderr -v=1 failed: exit status 80
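The exit status 80 above reduces to `sudo runc list -f json` failing with `open /run/runc: no such file or directory` even though crictl had just listed the kube-system containers. A minimal manual-diagnosis sketch, assuming the no-preload-468539 node container is still running; the commands only read state, and the idea that the runc state root differs from the path minikube queries is an assumption, not a confirmed root cause:
	# what the CRI runtime itself reports, independent of runc's state directory
	minikube ssh -p no-preload-468539 -- sudo crictl ps
	# does the default runc state root exist inside the node at all?
	minikube ssh -p no-preload-468539 -- sudo ls -la /run/runc
	# runtime configuration as crictl sees it (runtime type and root directories)
	minikube ssh -p no-preload-468539 -- sudo crictl info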
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-468539
helpers_test.go:244: (dbg) docker inspect no-preload-468539:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f",
	        "Created": "2025-12-10T06:13:26.609062695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:14:41.687258095Z",
	            "FinishedAt": "2025-12-10T06:14:40.758585487Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/hosts",
	        "LogPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f-json.log",
	        "Name": "/no-preload-468539",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-468539:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-468539",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f",
	                "LowerDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-468539",
	                "Source": "/var/lib/docker/volumes/no-preload-468539/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-468539",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-468539",
	                "name.minikube.sigs.k8s.io": "no-preload-468539",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a00bc003909d2bfd167f1062dc84c481d7a9ef00f2292f27d73064cdf9f3c7aa",
	            "SandboxKey": "/var/run/docker/netns/a00bc003909d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-468539": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8043b90263214f4b2e6a8501c7af598190f163277d9c059bfe96da303e39ab18",
	                    "EndpointID": "0bde3d20b0c789c61b02c2fc87550f85e62756fab0fbec3945e9ec3aff55b2a4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:55:f0:60:5b:1d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-468539",
	                        "6169612bc56b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
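For post-mortems of this shape, the full docker inspect dump above can be narrowed to the two fields the harness actually acts on; a small sketch reusing the same Go template the pause path already runs (visible at the cli_runner line in the stderr above), plus the container state:
	# container state as docker sees it (reads "running" in this run)
	docker inspect -f '{{.State.Status}}' no-preload-468539
	# published SSH port that the pause path dials (same template as in the log above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-468539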
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539: exit status 2 (384.469491ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-468539 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-468539 logs -n 25: (1.169656955s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p no-preload-468539 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ no-preload-468539 image list --format=json                                                                                                                                                                                                         │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p no-preload-468539 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
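The line above documents the klog-style header carried by every entry in this log: a severity letter, month+day, timestamp, thread id, source file and line, then the message. A minimal Go sketch of splitting such a header apart, assuming that standard layout (the regular expression and example line are illustrative, not minikube's own log parser):

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches headers like:
//   I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
// Capture groups: severity, month+day, time, thread id, file, line, message.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ..."
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}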
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
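The earlier "Not caching binary" entry points the downloader at https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm with a checksum query referencing the matching kubeadm.sha256 file. A small Go sketch of doing that verification by hand, assuming only those two published URLs (this is not the download code minikube itself runs):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory and fails on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm"

	binary, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	// The .sha256 file carries the hex digest as its first field.
	want := strings.Fields(string(sum))[0]
	got := fmt.Sprintf("%x", sha256.Sum256(binary))
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubeadm checksum verified:", got)
}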
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
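The 403, 500 and final 200 responses above all come from polling the apiserver's /healthz endpoint: anonymous requests are refused until the RBAC bootstrap roles exist, then the individual post-start hooks flip from failed to ok. A rough Go sketch of the same kind of poll against the address in this log, assuming an unauthenticated client that skips TLS verification (a real client would present the cluster CA and a client certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll /healthz?verbose until it reports 200, printing the per-check
	// output on failures (the 500 bodies above come from this endpoint).
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	url := "https://192.168.76.2:8443/healthz?verbose"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request error:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}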
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
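The cache entries above map each image reference to a tarball under .minikube/cache/images/<arch>/, keeping the registry and repository path and rewriting the tag separator into a filename-safe character. A reconstruction of that mapping as it appears in the paths logged here (an illustrative sketch, not minikube's actual cache code):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath reconstructs the on-disk layout seen in the log, e.g.
//   registry.k8s.io/pause:3.10.1 ->
//   <root>/cache/images/amd64/registry.k8s.io/pause_3.10.1
func cachePath(minikubeHome, arch, imageRef string) string {
	// The logged paths keep the registry/repository structure and replace
	// the ":" before the tag with "_".
	sanitized := strings.ReplaceAll(imageRef, ":", "_")
	return filepath.Join(minikubeHome, "cache", "images", arch, sanitized)
}

func main() {
	home := "/home/jenkins/minikube-integration/22094-5725/.minikube"
	for _, ref := range []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/kube-apiserver:v1.34.3",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	} {
		fmt.Println(ref, "->", cachePath(home, "amd64", ref))
	}
}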
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.174412  398989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:39.179699  398989 fix.go:56] duration metric: took 4.748715142s for fixHost
	I1210 06:15:39.179726  398989 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 4.748759367s
	I1210 06:15:39.179795  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:39.198657  398989 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:39.198712  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.198747  398989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:39.198819  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.220204  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.220241  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.317475  398989 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:39.391108  398989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:39.430876  398989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:39.435737  398989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:39.435812  398989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:39.444134  398989 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:39.444154  398989 start.go:496] detecting cgroup driver to use...
	I1210 06:15:39.444185  398989 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:39.444220  398989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:39.458418  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:39.470158  398989 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:39.470210  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:39.485432  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:39.497705  398989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:39.587848  398989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:39.679325  398989 docker.go:234] disabling docker service ...
	I1210 06:15:39.679390  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:39.695744  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:39.710121  398989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:39.803290  398989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:39.889666  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:39.901841  398989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:39.916001  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.053859  398989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:40.053907  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.064032  398989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:40.064119  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.074052  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.082799  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.091069  398989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:40.099125  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.108348  398989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.116442  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.124562  398989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:40.131659  398989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:40.139831  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:40.235238  398989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:40.390045  398989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:40.390127  398989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:40.394019  398989 start.go:564] Will wait 60s for crictl version
	I1210 06:15:40.394073  398989 ssh_runner.go:195] Run: which crictl
	I1210 06:15:40.397521  398989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:40.422130  398989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:40.422196  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.449888  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.482873  398989 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:15:40.484109  398989 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:40.504017  398989 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:40.508495  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:40.519961  398989 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:40.520150  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.655009  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.788669  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.920137  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:40.920210  398989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:40.955931  398989 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:40.955957  398989 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:40.955966  398989 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1210 06:15:40.956107  398989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-125336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:40.956192  398989 ssh_runner.go:195] Run: crio config
	I1210 06:15:41.004526  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:41.004548  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:41.004564  398989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:41.004584  398989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125336 NodeName:default-k8s-diff-port-125336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:41.004697  398989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
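	The multi-document stream above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what minikube copies to /var/tmp/minikube/kubeadm.yaml.new and later diffs against /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. A minimal sketch of inspecting such a stream outside minikube, assuming gopkg.in/yaml.v3 and a local copy of the file (illustrative only, not minikube code):

	// kubeadmkinds.go: decode a multi-document kubeadm config stream and print
	// each document's apiVersion and kind. Illustrative only; the file name is assumed.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml.new
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}

	Run against the block above, this would print four lines, one per embedded document.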
	
	I1210 06:15:41.004752  398989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:41.013662  398989 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:41.013711  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:41.021680  398989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 06:15:41.034639  398989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:41.047897  398989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1210 06:15:41.060681  398989 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:41.064298  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:41.074539  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:41.167815  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:41.192312  398989 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336 for IP: 192.168.103.2
	I1210 06:15:41.192334  398989 certs.go:195] generating shared ca certs ...
	I1210 06:15:41.192367  398989 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:41.192505  398989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:41.192546  398989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:41.192557  398989 certs.go:257] generating profile certs ...
	I1210 06:15:41.192643  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key
	I1210 06:15:41.192694  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134
	I1210 06:15:41.192729  398989 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key
	I1210 06:15:41.192855  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:41.192897  398989 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:41.192910  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:41.192952  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:41.192986  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:41.193016  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:41.193074  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:41.193841  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:41.212216  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:41.230779  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:41.249215  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:41.273141  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:15:41.291653  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:41.308892  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:41.328983  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:41.348815  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:41.369178  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:41.390044  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:41.407887  398989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:41.422822  398989 ssh_runner.go:195] Run: openssl version
	I1210 06:15:41.430217  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.438931  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:41.447682  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451942  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451995  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.496117  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:41.504580  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.512960  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:41.521564  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525244  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525308  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.564172  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:41.572852  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.580900  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:41.588301  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592675  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592721  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.637108  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:15:41.645490  398989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:41.649879  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:41.690638  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:41.747836  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:41.800228  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:41.862694  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:41.914250  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:15:41.958747  398989 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:41.959041  398989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:41.959166  398989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:41.998590  398989 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:15:41.998610  398989 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:15:41.998616  398989 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:15:41.998621  398989 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:15:41.998625  398989 cri.go:89] found id: ""
	I1210 06:15:41.998665  398989 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:42.012230  398989 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:42.012308  398989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:42.023047  398989 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:42.023062  398989 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:42.023133  398989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:42.032028  398989 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:42.033327  398989 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-125336" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.034299  398989 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-125336" cluster setting kubeconfig missing "default-k8s-diff-port-125336" context setting]
	I1210 06:15:42.035703  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.037888  398989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:42.047350  398989 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:15:42.047375  398989 kubeadm.go:602] duration metric: took 24.306597ms to restartPrimaryControlPlane
	I1210 06:15:42.047383  398989 kubeadm.go:403] duration metric: took 88.644178ms to StartCluster
	I1210 06:15:42.047399  398989 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.047471  398989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.049858  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.050141  398989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:42.050363  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:42.050409  398989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:42.050484  398989 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.050502  398989 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.050511  398989 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:42.050535  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051015  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051175  398989 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051191  398989 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.051199  398989 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:42.051223  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051559  398989 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051618  398989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:15:42.051583  398989 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:42.051661  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051950  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.053296  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:42.082195  398989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:42.082199  398989 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:15:42.083378  398989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.083403  398989 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1210 06:15:37.646711  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:39.648042  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:41.648596  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:42.083414  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:42.083554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.086520  398989 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.086542  398989 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:42.086569  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.086807  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:42.086824  398989 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:42.086879  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.088501  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.127971  398989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.127995  398989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:42.128058  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.131157  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.131148  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.163643  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.238425  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:42.261214  398989 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:42.266856  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:42.266878  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:42.273292  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.296500  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:42.296642  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:42.316168  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.322727  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:42.322747  398989 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:42.342110  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:42.342132  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:42.364017  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:42.364037  398989 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:42.383601  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:42.383628  398989 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:42.400222  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:42.400267  398989 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:42.413822  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:42.413841  398989 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:42.428985  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:42.429002  398989 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:42.445006  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:43.730716  398989 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:43.730761  398989 node_ready.go:38] duration metric: took 1.469517861s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:43.730780  398989 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:43.730833  398989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:44.295467  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.022145188s)
	I1210 06:15:44.295527  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.979317742s)
	I1210 06:15:44.295605  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.850565953s)
	I1210 06:15:44.295731  398989 api_server.go:72] duration metric: took 2.245559846s to wait for apiserver process to appear ...
	I1210 06:15:44.295748  398989 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:44.295770  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.297453  398989 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-125336 addons enable metrics-server
	
	I1210 06:15:44.301230  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.301258  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:44.307227  398989 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
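	For context on the 500 responses above: minikube keeps polling https://192.168.103.2:8444/healthz until the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) complete and the endpoint returns 200. A minimal sketch of that kind of readiness poll, assuming the endpoint from this log and a timeout similar to the 6m0s node wait seen earlier, and skipping TLS verification because the apiserver certificate is signed by the cluster's own minikubeCA (illustrative only, not minikube's api_server.go):

	// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200 OK.
	// The URL and timeout below are assumptions taken from this log, not constants
	// from minikube itself.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// the apiserver cert is signed by the cluster's own CA, so skip verification here
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz check passed
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8444/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}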
	
	
	==> CRI-O <==
	Dec 10 06:15:15 no-preload-468539 crio[565]: time="2025-12-10T06:15:15.678925704Z" level=info msg="Started container" PID=1733 containerID=c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper id=73712bb3-4cb8-47c0-9feb-2feab5253356 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6727c53c68d2c992cba4054dbdf81622ffc08b7306c127fbb56a5160c979dd7
	Dec 10 06:15:15 no-preload-468539 crio[565]: time="2025-12-10T06:15:15.726716536Z" level=info msg="Removing container: 1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c" id=3b469644-143d-482c-a337-3f6a5d53ab4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:15 no-preload-468539 crio[565]: time="2025-12-10T06:15:15.738228056Z" level=info msg="Removed container 1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=3b469644-143d-482c-a337-3f6a5d53ab4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.749029815Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=81195327-ba82-4ec7-99dd-50101f8067c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.750046409Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1513c5b2-0cfd-4647-8887-243037221edc name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.751116572Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d6a37c20-3bfd-4c43-8680-04be99c41567 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.751262037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.755825973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.756031761Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dd69b0dd8a5baebc2cc2c78b1afc9eacc92bfd6dedb35fdc04251c36071ca1a2/merged/etc/passwd: no such file or directory"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.756067796Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dd69b0dd8a5baebc2cc2c78b1afc9eacc92bfd6dedb35fdc04251c36071ca1a2/merged/etc/group: no such file or directory"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.756348049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.788237968Z" level=info msg="Created container 25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d: kube-system/storage-provisioner/storage-provisioner" id=d6a37c20-3bfd-4c43-8680-04be99c41567 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.788811605Z" level=info msg="Starting container: 25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d" id=fa3a0345-b5d3-4587-b31d-069cbe302488 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.790747124Z" level=info msg="Started container" PID=1747 containerID=25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d description=kube-system/storage-provisioner/storage-provisioner id=fa3a0345-b5d3-4587-b31d-069cbe302488 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2eef2e1954424a7df92088db2d46bd8b6641aedc8d37aaa4c14ed5f7a23e8ebe
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.620872787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=55e4be39-fe89-4202-bb61-639cc6b0e573 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.622047683Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9fe26ab2-3493-4dd7-a7f5-c4558db65dd6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.623515086Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=f10116cf-d95f-4123-8ce8-75f4e1b2f865 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.62366538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.629497691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.629989246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.672016424Z" level=info msg="Created container c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=f10116cf-d95f-4123-8ce8-75f4e1b2f865 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.672687004Z" level=info msg="Starting container: c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68" id=b16d02ae-ef9c-4d6f-bf7d-5204b661e0cb name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.674828531Z" level=info msg="Started container" PID=1785 containerID=c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper id=b16d02ae-ef9c-4d6f-bf7d-5204b661e0cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6727c53c68d2c992cba4054dbdf81622ffc08b7306c127fbb56a5160c979dd7
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.795014427Z" level=info msg="Removing container: c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6" id=f3c63514-06eb-401b-b01f-15269e1beaf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.805638062Z" level=info msg="Removed container c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=f3c63514-06eb-401b-b01f-15269e1beaf8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c8647f49d1ed1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   e6727c53c68d2       dashboard-metrics-scraper-867fb5f87b-5nqcq   kubernetes-dashboard
	25a05e04e545d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   2eef2e1954424       storage-provisioner                          kube-system
	12c94613744db       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   6d4c98a0efbd0       kubernetes-dashboard-b84665fb8-lbt26         kubernetes-dashboard
	780f17677488d       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   aa82329aff53f       coredns-7d764666f9-tnm7t                     kube-system
	222042b7b377d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   c7afc995de963       busybox                                      default
	32ee946f5e3f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   b5686fb99c88c       kindnet-wqxf2                                kube-system
	727c24c7f1527       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           53 seconds ago      Running             kube-proxy                  0                   962d973101a32       kube-proxy-ngf5r                             kube-system
	befbdb3ebe205       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   2eef2e1954424       storage-provisioner                          kube-system
	986b3c2f0cda8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           56 seconds ago      Running             etcd                        0                   95e96ff4c8f4a       etcd-no-preload-468539                       kube-system
	87175e8498ad3       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           56 seconds ago      Running             kube-scheduler              0                   306b145c1b33c       kube-scheduler-no-preload-468539             kube-system
	ec6692c835d1d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           56 seconds ago      Running             kube-controller-manager     0                   9f6751bcfc287       kube-controller-manager-no-preload-468539    kube-system
	c134cc07c343e       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           56 seconds ago      Running             kube-apiserver              0                   f11720e704497       kube-apiserver-no-preload-468539             kube-system
	
	
	==> coredns [780f17677488d4e3d342bfbbfc968a945a742b5fb3447af9c987bb95762b3366] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41645 - 56675 "HINFO IN 1064879011859968781.4748841350176556665. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.152384586s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-468539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-468539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=no-preload-468539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_13_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-468539
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:14:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-468539
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bc6f673e-f944-4d8e-86ab-fb27468ab4df
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-tnm7t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-468539                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-wqxf2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-468539              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-468539     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-ngf5r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-468539              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-5nqcq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-lbt26          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node no-preload-468539 event: Registered Node no-preload-468539 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-468539 event: Registered Node no-preload-468539 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [986b3c2f0cda833eb6ebd4b6f5458a0e267bb8b83d3a119c68be6281e7585474] <==
	{"level":"info","ts":"2025-12-10T06:14:49.274144Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-10T06:14:49.274479Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:14:49.274523Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:14:49.273402Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-10T06:14:49.853814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:14:49.853924Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:14:49.853998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-10T06:14:49.854051Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:14:49.854096Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.855061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.855178Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:14:49.855343Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.855382Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.858376Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-468539 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:14:49.858517Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:14:49.858507Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:14:49.860576Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:14:49.860808Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:14:49.861005Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:14:49.861367Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:14:49.865399Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:14:49.865436Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-12-10T06:15:01.375636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.815671ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766735193024483 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-zvaolmidp565taklyuh7zybt5e\" mod_revision:461 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-zvaolmidp565taklyuh7zybt5e\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-zvaolmidp565taklyuh7zybt5e\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:15:01.375917Z","caller":"traceutil/trace.go:172","msg":"trace[1812910808] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"207.267747ms","start":"2025-12-10T06:15:01.168635Z","end":"2025-12-10T06:15:01.375903Z","steps":["trace[1812910808] 'process raft request'  (duration: 207.187415ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:15:01.375948Z","caller":"traceutil/trace.go:172","msg":"trace[225420387] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"237.159161ms","start":"2025-12-10T06:15:01.138776Z","end":"2025-12-10T06:15:01.375935Z","steps":["trace[225420387] 'process raft request'  (duration: 108.562159ms)","trace[225420387] 'compare'  (duration: 127.681702ms)"],"step_count":2}
	
	
	==> kernel <==
	 06:15:45 up 58 min,  0 user,  load average: 4.75, 4.55, 2.97
	Linux no-preload-468539 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [32ee946f5e3f4d42656bffcedb68d2f90dfd63a4a50ee17ca9f5e5f823cabf61] <==
	I1210 06:14:52.232582       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:14:52.232883       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 06:14:52.233091       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:14:52.233114       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:14:52.233130       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:14:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:14:52.436685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:14:52.436720       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:14:52.436746       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:14:52.436887       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:14:52.837481       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:14:52.837511       1 metrics.go:72] Registering metrics
	I1210 06:14:52.837589       1 controller.go:711] "Syncing nftables rules"
	I1210 06:15:02.436564       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:02.436628       1 main.go:301] handling current node
	I1210 06:15:12.437322       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:12.437379       1 main.go:301] handling current node
	I1210 06:15:22.436857       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:22.436909       1 main.go:301] handling current node
	I1210 06:15:32.440167       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:32.440206       1 main.go:301] handling current node
	I1210 06:15:42.438213       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:42.438249       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c134cc07c343ee0eec86fdc21ea9f07ab5dc05344377ced872b852a9c514a84c] <==
	I1210 06:14:50.929359       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:14:50.929365       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:14:50.929371       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:14:50.929538       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:14:50.929588       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:14:50.930113       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.930136       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.932549       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.935474       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:14:50.937394       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:14:50.986701       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:14:50.992188       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.992259       1 policy_source.go:248] refreshing policies
	I1210 06:14:51.003144       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:14:51.240275       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:14:51.266624       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:14:51.282891       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:14:51.289672       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:14:51.296289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:14:51.332554       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.120.186"}
	I1210 06:14:51.345973       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.249.195"}
	I1210 06:14:51.832196       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:14:54.515423       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:14:54.562558       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:14:54.612861       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ec6692c835d1d4b482f3d9e22fd61d623beb739ec5760b5e0b356cba3798f5ef] <==
	I1210 06:14:54.066861       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:14:54.066912       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.066424       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067110       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067404       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067473       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067485       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067455       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067614       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067440       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067463       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067924       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067933       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068020       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068044       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068162       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068163       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068555       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068634       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.072572       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.073283       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:14:54.166580       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.166597       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:14:54.166601       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:14:54.173546       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [727c24c7f1527589ab0502be864047d735f1305e9896534e8e8fbe0d09f2be60] <==
	I1210 06:14:52.055494       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:14:52.126697       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:14:52.226870       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:52.226900       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 06:14:52.227007       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:14:52.245676       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:14:52.245737       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:14:52.251011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:14:52.251369       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:14:52.251387       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:52.252803       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:14:52.252827       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:14:52.252849       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:14:52.252854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:14:52.252879       1 config.go:309] "Starting node config controller"
	I1210 06:14:52.252888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:14:52.252895       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:14:52.252938       1 config.go:200] "Starting service config controller"
	I1210 06:14:52.252970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:14:52.353876       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:14:52.353915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:14:52.354267       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [87175e8498ad3223a893f9948444ea564e4f493dc0ce2a68eed9c2e36f356f00] <==
	I1210 06:14:49.479979       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:14:50.856009       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:14:50.856144       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:14:50.856164       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:14:50.856174       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:14:50.904723       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:14:50.904765       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:50.908046       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:14:50.908175       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:14:50.908190       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:14:50.908210       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:14:51.008620       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:15:07 no-preload-468539 kubelet[715]: E1210 06:15:07.959257     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-468539" containerName="kube-apiserver"
	Dec 10 06:15:08 no-preload-468539 kubelet[715]: E1210 06:15:08.704670     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-468539" containerName="kube-apiserver"
	Dec 10 06:15:08 no-preload-468539 kubelet[715]: E1210 06:15:08.858314     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-468539" containerName="kube-scheduler"
	Dec 10 06:15:09 no-preload-468539 kubelet[715]: E1210 06:15:09.706572     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-468539" containerName="kube-scheduler"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: E1210 06:15:15.617945     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: I1210 06:15:15.617987     715 scope.go:122] "RemoveContainer" containerID="1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: I1210 06:15:15.725286     715 scope.go:122] "RemoveContainer" containerID="1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: E1210 06:15:15.725569     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: I1210 06:15:15.725601     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: E1210 06:15:15.725793     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5nqcq_kubernetes-dashboard(35ff61a3-2a03-4755-80ce-5f439a59c6db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" podUID="35ff61a3-2a03-4755-80ce-5f439a59c6db"
	Dec 10 06:15:17 no-preload-468539 kubelet[715]: E1210 06:15:17.468418     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:17 no-preload-468539 kubelet[715]: I1210 06:15:17.468458     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:17 no-preload-468539 kubelet[715]: E1210 06:15:17.468690     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5nqcq_kubernetes-dashboard(35ff61a3-2a03-4755-80ce-5f439a59c6db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" podUID="35ff61a3-2a03-4755-80ce-5f439a59c6db"
	Dec 10 06:15:22 no-preload-468539 kubelet[715]: I1210 06:15:22.748556     715 scope.go:122] "RemoveContainer" containerID="befbdb3ebe2058e17934ebd0991371f1d2a7eff5a44d52842577b04f68e5366c"
	Dec 10 06:15:28 no-preload-468539 kubelet[715]: E1210 06:15:28.263549     715 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-tnm7t" containerName="coredns"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: E1210 06:15:39.618519     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: I1210 06:15:39.618573     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: I1210 06:15:39.793634     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: E1210 06:15:39.793933     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: I1210 06:15:39.793975     715 scope.go:122] "RemoveContainer" containerID="c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: E1210 06:15:39.794203     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5nqcq_kubernetes-dashboard(35ff61a3-2a03-4755-80ce-5f439a59c6db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" podUID="35ff61a3-2a03-4755-80ce-5f439a59c6db"
	Dec 10 06:15:42 no-preload-468539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:15:42 no-preload-468539 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:15:42 no-preload-468539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:15:42 no-preload-468539 systemd[1]: kubelet.service: Consumed 1.699s CPU time.
	
	
	==> kubernetes-dashboard [12c94613744db32c3f84814a4c7492788abc23f15b7ed91e6947a03dfde75487] <==
	2025/12/10 06:14:58 Using namespace: kubernetes-dashboard
	2025/12/10 06:14:58 Using in-cluster config to connect to apiserver
	2025/12/10 06:14:58 Using secret token for csrf signing
	2025/12/10 06:14:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:14:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:14:58 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/10 06:14:58 Generating JWE encryption key
	2025/12/10 06:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:14:58 Initializing JWE encryption key from synchronized object
	2025/12/10 06:14:58 Creating in-cluster Sidecar client
	2025/12/10 06:14:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:14:58 Serving insecurely on HTTP port: 9090
	2025/12/10 06:15:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:14:58 Starting overwatch
	
	
	==> storage-provisioner [25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d] <==
	I1210 06:15:22.802409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:15:22.809788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:15:22.809842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:15:22.811715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:26.265844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:30.527725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:34.126457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:37.181039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:40.203552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:40.208311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:40.208523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:15:40.208713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-468539_e02368c8-1d6c-466e-8afe-545dd566e201!
	I1210 06:15:40.209173       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bd6fb81-e34e-4509-b3cd-dcebd24f16e8", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-468539_e02368c8-1d6c-466e-8afe-545dd566e201 became leader
	W1210 06:15:40.210883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:40.214648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:40.309932       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-468539_e02368c8-1d6c-466e-8afe-545dd566e201!
	W1210 06:15:42.218607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:42.227484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:44.232092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:44.237466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [befbdb3ebe2058e17934ebd0991371f1d2a7eff5a44d52842577b04f68e5366c] <==
	I1210 06:14:52.019641       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:15:22.023050       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-468539 -n no-preload-468539
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-468539 -n no-preload-468539: exit status 2 (318.144877ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-468539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-468539
helpers_test.go:244: (dbg) docker inspect no-preload-468539:

-- stdout --
	[
	    {
	        "Id": "6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f",
	        "Created": "2025-12-10T06:13:26.609062695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:14:41.687258095Z",
	            "FinishedAt": "2025-12-10T06:14:40.758585487Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/hosts",
	        "LogPath": "/var/lib/docker/containers/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f/6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f-json.log",
	        "Name": "/no-preload-468539",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-468539:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-468539",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6169612bc56bab93835939c53ac02a13c50da032ef0d09bba72271c5ab86dd4f",
	                "LowerDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/461fe4a5d9f098045f9eeb90a0afe8d126d8e281aa5837713c6a0ead57ebe0bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-468539",
	                "Source": "/var/lib/docker/volumes/no-preload-468539/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-468539",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-468539",
	                "name.minikube.sigs.k8s.io": "no-preload-468539",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a00bc003909d2bfd167f1062dc84c481d7a9ef00f2292f27d73064cdf9f3c7aa",
	            "SandboxKey": "/var/run/docker/netns/a00bc003909d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-468539": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8043b90263214f4b2e6a8501c7af598190f163277d9c059bfe96da303e39ab18",
	                    "EndpointID": "0bde3d20b0c789c61b02c2fc87550f85e62756fab0fbec3945e9ec3aff55b2a4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:55:f0:60:5b:1d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-468539",
	                        "6169612bc56b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
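The port mappings in the inspect output above are what the harness later uses to reach the node over SSH. As a rough illustration, the 22/tcp host port can be read back with the same Go-template filter that appears in the cli_runner entries further down in this log (quoting adapted here for an interactive shell; the container name is the profile name shown in the output):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-468539

Against the JSON above this would print 33118, the host port bound to 127.0.0.1 for 22/tcp.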
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539: exit status 2 (312.36568ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
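For reference, the pause/status sequence this test exercises can be re-run by hand against the same profile. The two commands below are the ones recorded in the audit log and in the status check above; the extra template fields in the second command ({{.Kubelet}} and {{.APIServer}}) are an assumption about the minikube status format and may need adjusting:

	out/minikube-linux-amd64 pause -p no-preload-468539 --alsologtostderr -v=1
	out/minikube-linux-amd64 status -p no-preload-468539 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'

minikube status exits non-zero whenever not every component reports Running, so the exit status 2 here is not by itself conclusive, which is presumably why the helper marks it "may be ok" while still collecting post-mortem logs.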
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-468539 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-468539 logs -n 25: (1.082185335s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-028500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ stop    │ -p embed-certs-028500 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ addons  │ enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ no-preload-468539 image list --format=json                                                                                                                                                                                                         │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p no-preload-468539 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
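	The cache.go lines above show every required image already present as a tarball under .minikube/cache/images/<arch>/<registry>/..., so nothing is re-downloaded. An illustrative way to see that layout on the build host (path taken from the log):
	    ls /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/
	    # expected entries per the log: pause_3.10.1, kube-apiserver_v1.34.3, kube-controller-manager_v1.34.3,
	    # kube-scheduler_v1.34.3, kube-proxy_v1.34.3, etcd_3.6.5-0 (coredns sits in a coredns/ subdirectory)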
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
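	The /etc/sysconfig/crio.minikube file written just above hands CRI-O an --insecure-registry flag for the service CIDR (10.96.0.0/12); presumably the crio unit in the kicbase image picks it up via an EnvironmentFile= directive. A hedged way to confirm both pieces on the node:
	    # hedged check: assumes the crio systemd unit references the sysconfig file
	    systemctl cat crio | grep -i environmentfile
	    cat /etc/sysconfig/crio.minikube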
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.174412  398989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:39.179699  398989 fix.go:56] duration metric: took 4.748715142s for fixHost
	I1210 06:15:39.179726  398989 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 4.748759367s
	I1210 06:15:39.179795  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:39.198657  398989 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:39.198712  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.198747  398989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:39.198819  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.220204  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.220241  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.317475  398989 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:39.391108  398989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:39.430876  398989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:39.435737  398989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:39.435812  398989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:39.444134  398989 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:39.444154  398989 start.go:496] detecting cgroup driver to use...
	I1210 06:15:39.444185  398989 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:39.444220  398989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:39.458418  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:39.470158  398989 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:39.470210  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:39.485432  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:39.497705  398989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:39.587848  398989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:39.679325  398989 docker.go:234] disabling docker service ...
	I1210 06:15:39.679390  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:39.695744  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:39.710121  398989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:39.803290  398989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:39.889666  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:39.901841  398989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:39.916001  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.053859  398989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:40.053907  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.064032  398989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:40.064119  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.074052  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.082799  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.091069  398989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:40.099125  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.108348  398989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.116442  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.124562  398989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:40.131659  398989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:40.139831  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:40.235238  398989 ssh_runner.go:195] Run: sudo systemctl restart crio
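	The sed edits above adjust CRI-O's 02-crio.conf drop-in (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon is reloaded and restarted. An illustrative way to verify the result on the node:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf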
	I1210 06:15:40.390045  398989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:40.390127  398989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:40.394019  398989 start.go:564] Will wait 60s for crictl version
	I1210 06:15:40.394073  398989 ssh_runner.go:195] Run: which crictl
	I1210 06:15:40.397521  398989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:40.422130  398989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:40.422196  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.449888  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.482873  398989 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:15:40.484109  398989 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
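	The Go template above pulls the network name, driver, subnet, gateway, MTU and container IPs out of a single docker network inspect call; a pared-down equivalent (illustrative) is:
	    docker network inspect default-k8s-diff-port-125336 \
	        --format '{{range .IPAM.Config}}{{.Subnet}} (gateway {{.Gateway}}){{end}}'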
	I1210 06:15:40.504017  398989 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:40.508495  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:40.519961  398989 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:40.520150  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.655009  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.788669  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.920137  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:40.920210  398989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:40.955931  398989 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:40.955957  398989 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:40.955966  398989 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1210 06:15:40.956107  398989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-125336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
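	The unit fragment above is the 10-kubeadm.conf drop-in that blanks and then overrides the kubelet ExecStart with minikube-specific flags; it is copied to /etc/systemd/system/kubelet.service.d/ a few lines further down. Confirming the effective unit on the node (illustrative):
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart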
	I1210 06:15:40.956192  398989 ssh_runner.go:195] Run: crio config
	I1210 06:15:41.004526  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:41.004548  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:41.004564  398989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:41.004584  398989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125336 NodeName:default-k8s-diff-port-125336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:41.004697  398989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:41.004752  398989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:41.013662  398989 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:41.013711  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:41.021680  398989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 06:15:41.034639  398989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:41.047897  398989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
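	The kubeadm config rendered earlier is staged as /var/tmp/minikube/kubeadm.yaml.new; on a fresh control plane it would typically be fed to kubeadm init (a sketch only — this run takes the restart path below because existing configuration files are found):
	    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=all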
	I1210 06:15:41.060681  398989 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:41.064298  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:41.074539  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:41.167815  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:41.192312  398989 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336 for IP: 192.168.103.2
	I1210 06:15:41.192334  398989 certs.go:195] generating shared ca certs ...
	I1210 06:15:41.192367  398989 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:41.192505  398989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:41.192546  398989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:41.192557  398989 certs.go:257] generating profile certs ...
	I1210 06:15:41.192643  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key
	I1210 06:15:41.192694  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134
	I1210 06:15:41.192729  398989 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key
	I1210 06:15:41.192855  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:41.192897  398989 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:41.192910  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:41.192952  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:41.192986  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:41.193016  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:41.193074  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:41.193841  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:41.212216  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:41.230779  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:41.249215  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:41.273141  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:15:41.291653  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:41.308892  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:41.328983  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:41.348815  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:41.369178  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:41.390044  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:41.407887  398989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:41.422822  398989 ssh_runner.go:195] Run: openssl version
	I1210 06:15:41.430217  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.438931  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:41.447682  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451942  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451995  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.496117  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:41.504580  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.512960  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:41.521564  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525244  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525308  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.564172  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:41.572852  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.580900  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:41.588301  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592675  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592721  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.637108  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
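	The ln/test pairs above install each CA into the system trust store under its OpenSSL subject-hash name (<hash>.0), which is why openssl x509 -hash is run first. The same pattern by hand (illustrative, hypothetical certificate path):
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"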
	I1210 06:15:41.645490  398989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:41.649879  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:41.690638  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:41.747836  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:41.800228  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:41.862694  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:41.914250  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
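	Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, presumably so the restart path can decide whether certificates need regenerating. Standalone (illustrative):
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "valid for at least another 24h"
	    else
	        echo "expires within 24h"
	    fi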
	I1210 06:15:41.958747  398989 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:41.959041  398989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:41.959166  398989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:41.998590  398989 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:15:41.998610  398989 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:15:41.998616  398989 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:15:41.998621  398989 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:15:41.998625  398989 cri.go:89] found id: ""
	I1210 06:15:41.998665  398989 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:42.012230  398989 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:42.012308  398989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:42.023047  398989 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:42.023062  398989 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:42.023133  398989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:42.032028  398989 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:42.033327  398989 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-125336" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.034299  398989 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-125336" cluster setting kubeconfig missing "default-k8s-diff-port-125336" context setting]
	I1210 06:15:42.035703  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.037888  398989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:42.047350  398989 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:15:42.047375  398989 kubeadm.go:602] duration metric: took 24.306597ms to restartPrimaryControlPlane
	I1210 06:15:42.047383  398989 kubeadm.go:403] duration metric: took 88.644178ms to StartCluster
	I1210 06:15:42.047399  398989 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.047471  398989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.049858  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.050141  398989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:42.050363  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:42.050409  398989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:42.050484  398989 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.050502  398989 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.050511  398989 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:42.050535  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051015  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051175  398989 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051191  398989 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.051199  398989 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:42.051223  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051559  398989 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051618  398989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:15:42.051583  398989 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:42.051661  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051950  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.053296  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:42.082195  398989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:42.082199  398989 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:15:42.083378  398989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.083403  398989 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1210 06:15:37.646711  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:39.648042  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:41.648596  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:42.083414  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:42.083554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.086520  398989 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.086542  398989 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:42.086569  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.086807  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:42.086824  398989 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:42.086879  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.088501  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.127971  398989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.127995  398989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:42.128058  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.131157  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.131148  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.163643  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.238425  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:42.261214  398989 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:42.266856  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:42.266878  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:42.273292  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.296500  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:42.296642  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:42.316168  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.322727  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:42.322747  398989 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:42.342110  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:42.342132  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:42.364017  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:42.364037  398989 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:42.383601  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:42.383628  398989 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:42.400222  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:42.400267  398989 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:42.413822  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:42.413841  398989 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:42.428985  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:42.429002  398989 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:42.445006  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:43.730716  398989 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:43.730761  398989 node_ready.go:38] duration metric: took 1.469517861s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:43.730780  398989 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:43.730833  398989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:44.295467  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.022145188s)
	I1210 06:15:44.295527  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.979317742s)
	I1210 06:15:44.295605  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.850565953s)
	I1210 06:15:44.295731  398989 api_server.go:72] duration metric: took 2.245559846s to wait for apiserver process to appear ...
	I1210 06:15:44.295748  398989 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:44.295770  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.297453  398989 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-125336 addons enable metrics-server
	
	I1210 06:15:44.301230  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.301258  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:44.307227  398989 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Dec 10 06:15:15 no-preload-468539 crio[565]: time="2025-12-10T06:15:15.678925704Z" level=info msg="Started container" PID=1733 containerID=c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper id=73712bb3-4cb8-47c0-9feb-2feab5253356 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6727c53c68d2c992cba4054dbdf81622ffc08b7306c127fbb56a5160c979dd7
	Dec 10 06:15:15 no-preload-468539 crio[565]: time="2025-12-10T06:15:15.726716536Z" level=info msg="Removing container: 1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c" id=3b469644-143d-482c-a337-3f6a5d53ab4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:15 no-preload-468539 crio[565]: time="2025-12-10T06:15:15.738228056Z" level=info msg="Removed container 1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=3b469644-143d-482c-a337-3f6a5d53ab4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.749029815Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=81195327-ba82-4ec7-99dd-50101f8067c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.750046409Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1513c5b2-0cfd-4647-8887-243037221edc name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.751116572Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d6a37c20-3bfd-4c43-8680-04be99c41567 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.751262037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.755825973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.756031761Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dd69b0dd8a5baebc2cc2c78b1afc9eacc92bfd6dedb35fdc04251c36071ca1a2/merged/etc/passwd: no such file or directory"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.756067796Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dd69b0dd8a5baebc2cc2c78b1afc9eacc92bfd6dedb35fdc04251c36071ca1a2/merged/etc/group: no such file or directory"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.756348049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.788237968Z" level=info msg="Created container 25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d: kube-system/storage-provisioner/storage-provisioner" id=d6a37c20-3bfd-4c43-8680-04be99c41567 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.788811605Z" level=info msg="Starting container: 25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d" id=fa3a0345-b5d3-4587-b31d-069cbe302488 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:22 no-preload-468539 crio[565]: time="2025-12-10T06:15:22.790747124Z" level=info msg="Started container" PID=1747 containerID=25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d description=kube-system/storage-provisioner/storage-provisioner id=fa3a0345-b5d3-4587-b31d-069cbe302488 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2eef2e1954424a7df92088db2d46bd8b6641aedc8d37aaa4c14ed5f7a23e8ebe
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.620872787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=55e4be39-fe89-4202-bb61-639cc6b0e573 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.622047683Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9fe26ab2-3493-4dd7-a7f5-c4558db65dd6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.623515086Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=f10116cf-d95f-4123-8ce8-75f4e1b2f865 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.62366538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.629497691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.629989246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.672016424Z" level=info msg="Created container c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=f10116cf-d95f-4123-8ce8-75f4e1b2f865 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.672687004Z" level=info msg="Starting container: c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68" id=b16d02ae-ef9c-4d6f-bf7d-5204b661e0cb name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.674828531Z" level=info msg="Started container" PID=1785 containerID=c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper id=b16d02ae-ef9c-4d6f-bf7d-5204b661e0cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6727c53c68d2c992cba4054dbdf81622ffc08b7306c127fbb56a5160c979dd7
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.795014427Z" level=info msg="Removing container: c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6" id=f3c63514-06eb-401b-b01f-15269e1beaf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:39 no-preload-468539 crio[565]: time="2025-12-10T06:15:39.805638062Z" level=info msg="Removed container c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq/dashboard-metrics-scraper" id=f3c63514-06eb-401b-b01f-15269e1beaf8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c8647f49d1ed1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   e6727c53c68d2       dashboard-metrics-scraper-867fb5f87b-5nqcq   kubernetes-dashboard
	25a05e04e545d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   2eef2e1954424       storage-provisioner                          kube-system
	12c94613744db       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   6d4c98a0efbd0       kubernetes-dashboard-b84665fb8-lbt26         kubernetes-dashboard
	780f17677488d       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     0                   aa82329aff53f       coredns-7d764666f9-tnm7t                     kube-system
	222042b7b377d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   c7afc995de963       busybox                                      default
	32ee946f5e3f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   b5686fb99c88c       kindnet-wqxf2                                kube-system
	727c24c7f1527       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           55 seconds ago      Running             kube-proxy                  0                   962d973101a32       kube-proxy-ngf5r                             kube-system
	befbdb3ebe205       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   2eef2e1954424       storage-provisioner                          kube-system
	986b3c2f0cda8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           58 seconds ago      Running             etcd                        0                   95e96ff4c8f4a       etcd-no-preload-468539                       kube-system
	87175e8498ad3       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           58 seconds ago      Running             kube-scheduler              0                   306b145c1b33c       kube-scheduler-no-preload-468539             kube-system
	ec6692c835d1d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           58 seconds ago      Running             kube-controller-manager     0                   9f6751bcfc287       kube-controller-manager-no-preload-468539    kube-system
	c134cc07c343e       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           58 seconds ago      Running             kube-apiserver              0                   f11720e704497       kube-apiserver-no-preload-468539             kube-system
	
	
	==> coredns [780f17677488d4e3d342bfbbfc968a945a742b5fb3447af9c987bb95762b3366] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41645 - 56675 "HINFO IN 1064879011859968781.4748841350176556665. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.152384586s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-468539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-468539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=no-preload-468539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_13_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-468539
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:15:22 +0000   Wed, 10 Dec 2025 06:14:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-468539
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bc6f673e-f944-4d8e-86ab-fb27468ab4df
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-tnm7t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-468539                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-wqxf2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-468539              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-468539     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-ngf5r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-468539              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-5nqcq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-lbt26          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node no-preload-468539 event: Registered Node no-preload-468539 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-468539 event: Registered Node no-preload-468539 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [986b3c2f0cda833eb6ebd4b6f5458a0e267bb8b83d3a119c68be6281e7585474] <==
	{"level":"info","ts":"2025-12-10T06:14:49.274144Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-10T06:14:49.274479Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:14:49.274523Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:14:49.273402Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-10T06:14:49.853814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:14:49.853924Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:14:49.853998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-10T06:14:49.854051Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:14:49.854096Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.855061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.855178Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-10T06:14:49.855343Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.855382Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-10T06:14:49.858376Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-468539 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:14:49.858517Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:14:49.858507Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:14:49.860576Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:14:49.860808Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:14:49.861005Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:14:49.861367Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:14:49.865399Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:14:49.865436Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-12-10T06:15:01.375636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.815671ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766735193024483 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-zvaolmidp565taklyuh7zybt5e\" mod_revision:461 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-zvaolmidp565taklyuh7zybt5e\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-zvaolmidp565taklyuh7zybt5e\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:15:01.375917Z","caller":"traceutil/trace.go:172","msg":"trace[1812910808] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"207.267747ms","start":"2025-12-10T06:15:01.168635Z","end":"2025-12-10T06:15:01.375903Z","steps":["trace[1812910808] 'process raft request'  (duration: 207.187415ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:15:01.375948Z","caller":"traceutil/trace.go:172","msg":"trace[225420387] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"237.159161ms","start":"2025-12-10T06:15:01.138776Z","end":"2025-12-10T06:15:01.375935Z","steps":["trace[225420387] 'process raft request'  (duration: 108.562159ms)","trace[225420387] 'compare'  (duration: 127.681702ms)"],"step_count":2}
	
	
	==> kernel <==
	 06:15:47 up 58 min,  0 user,  load average: 4.75, 4.55, 2.97
	Linux no-preload-468539 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [32ee946f5e3f4d42656bffcedb68d2f90dfd63a4a50ee17ca9f5e5f823cabf61] <==
	I1210 06:14:52.232582       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:14:52.232883       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 06:14:52.233091       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:14:52.233114       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:14:52.233130       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:14:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:14:52.436685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:14:52.436720       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:14:52.436746       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:14:52.436887       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:14:52.837481       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:14:52.837511       1 metrics.go:72] Registering metrics
	I1210 06:14:52.837589       1 controller.go:711] "Syncing nftables rules"
	I1210 06:15:02.436564       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:02.436628       1 main.go:301] handling current node
	I1210 06:15:12.437322       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:12.437379       1 main.go:301] handling current node
	I1210 06:15:22.436857       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:22.436909       1 main.go:301] handling current node
	I1210 06:15:32.440167       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:32.440206       1 main.go:301] handling current node
	I1210 06:15:42.438213       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:15:42.438249       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c134cc07c343ee0eec86fdc21ea9f07ab5dc05344377ced872b852a9c514a84c] <==
	I1210 06:14:50.929359       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:14:50.929365       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:14:50.929371       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:14:50.929538       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:14:50.929588       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:14:50.930113       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.930136       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.932549       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.935474       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:14:50.937394       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:14:50.986701       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:14:50.992188       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:50.992259       1 policy_source.go:248] refreshing policies
	I1210 06:14:51.003144       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:14:51.240275       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:14:51.266624       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:14:51.282891       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:14:51.289672       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:14:51.296289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:14:51.332554       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.120.186"}
	I1210 06:14:51.345973       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.249.195"}
	I1210 06:14:51.832196       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:14:54.515423       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:14:54.562558       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:14:54.612861       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ec6692c835d1d4b482f3d9e22fd61d623beb739ec5760b5e0b356cba3798f5ef] <==
	I1210 06:14:54.066861       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:14:54.066912       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.066424       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067110       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067404       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067473       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067485       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067455       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067614       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067440       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067463       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067924       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.067933       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068020       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068044       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068162       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068163       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068555       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.068634       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.072572       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.073283       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:14:54.166580       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:54.166597       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:14:54.166601       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:14:54.173546       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [727c24c7f1527589ab0502be864047d735f1305e9896534e8e8fbe0d09f2be60] <==
	I1210 06:14:52.055494       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:14:52.126697       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:14:52.226870       1 shared_informer.go:377] "Caches are synced"
	I1210 06:14:52.226900       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 06:14:52.227007       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:14:52.245676       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:14:52.245737       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:14:52.251011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:14:52.251369       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:14:52.251387       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:52.252803       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:14:52.252827       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:14:52.252849       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:14:52.252854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:14:52.252879       1 config.go:309] "Starting node config controller"
	I1210 06:14:52.252888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:14:52.252895       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:14:52.252938       1 config.go:200] "Starting service config controller"
	I1210 06:14:52.252970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:14:52.353876       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:14:52.353915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:14:52.354267       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [87175e8498ad3223a893f9948444ea564e4f493dc0ce2a68eed9c2e36f356f00] <==
	I1210 06:14:49.479979       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:14:50.856009       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:14:50.856144       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:14:50.856164       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:14:50.856174       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:14:50.904723       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:14:50.904765       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:14:50.908046       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:14:50.908175       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:14:50.908190       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:14:50.908210       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:14:51.008620       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:15:07 no-preload-468539 kubelet[715]: E1210 06:15:07.959257     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-468539" containerName="kube-apiserver"
	Dec 10 06:15:08 no-preload-468539 kubelet[715]: E1210 06:15:08.704670     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-468539" containerName="kube-apiserver"
	Dec 10 06:15:08 no-preload-468539 kubelet[715]: E1210 06:15:08.858314     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-468539" containerName="kube-scheduler"
	Dec 10 06:15:09 no-preload-468539 kubelet[715]: E1210 06:15:09.706572     715 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-468539" containerName="kube-scheduler"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: E1210 06:15:15.617945     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: I1210 06:15:15.617987     715 scope.go:122] "RemoveContainer" containerID="1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: I1210 06:15:15.725286     715 scope.go:122] "RemoveContainer" containerID="1506bb38668897224f9ced2f4e8bdaf4c60f92f6b750cfe07e7e8dbbdfd3d49c"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: E1210 06:15:15.725569     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: I1210 06:15:15.725601     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:15 no-preload-468539 kubelet[715]: E1210 06:15:15.725793     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5nqcq_kubernetes-dashboard(35ff61a3-2a03-4755-80ce-5f439a59c6db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" podUID="35ff61a3-2a03-4755-80ce-5f439a59c6db"
	Dec 10 06:15:17 no-preload-468539 kubelet[715]: E1210 06:15:17.468418     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:17 no-preload-468539 kubelet[715]: I1210 06:15:17.468458     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:17 no-preload-468539 kubelet[715]: E1210 06:15:17.468690     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5nqcq_kubernetes-dashboard(35ff61a3-2a03-4755-80ce-5f439a59c6db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" podUID="35ff61a3-2a03-4755-80ce-5f439a59c6db"
	Dec 10 06:15:22 no-preload-468539 kubelet[715]: I1210 06:15:22.748556     715 scope.go:122] "RemoveContainer" containerID="befbdb3ebe2058e17934ebd0991371f1d2a7eff5a44d52842577b04f68e5366c"
	Dec 10 06:15:28 no-preload-468539 kubelet[715]: E1210 06:15:28.263549     715 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-tnm7t" containerName="coredns"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: E1210 06:15:39.618519     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: I1210 06:15:39.618573     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: I1210 06:15:39.793634     715 scope.go:122] "RemoveContainer" containerID="c8af261928392d63f319119f75b653a03be259dca90cdca37ada5de84e0d7ee6"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: E1210 06:15:39.793933     715 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" containerName="dashboard-metrics-scraper"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: I1210 06:15:39.793975     715 scope.go:122] "RemoveContainer" containerID="c8647f49d1ed1196648ae64e5bff3a7cae06e61954f6adebfbe20ab63be11c68"
	Dec 10 06:15:39 no-preload-468539 kubelet[715]: E1210 06:15:39.794203     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5nqcq_kubernetes-dashboard(35ff61a3-2a03-4755-80ce-5f439a59c6db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5nqcq" podUID="35ff61a3-2a03-4755-80ce-5f439a59c6db"
	Dec 10 06:15:42 no-preload-468539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:15:42 no-preload-468539 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:15:42 no-preload-468539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:15:42 no-preload-468539 systemd[1]: kubelet.service: Consumed 1.699s CPU time.
	
	
	==> kubernetes-dashboard [12c94613744db32c3f84814a4c7492788abc23f15b7ed91e6947a03dfde75487] <==
	2025/12/10 06:14:58 Starting overwatch
	2025/12/10 06:14:58 Using namespace: kubernetes-dashboard
	2025/12/10 06:14:58 Using in-cluster config to connect to apiserver
	2025/12/10 06:14:58 Using secret token for csrf signing
	2025/12/10 06:14:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:14:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:14:58 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/10 06:14:58 Generating JWE encryption key
	2025/12/10 06:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:14:58 Initializing JWE encryption key from synchronized object
	2025/12/10 06:14:58 Creating in-cluster Sidecar client
	2025/12/10 06:14:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:14:58 Serving insecurely on HTTP port: 9090
	2025/12/10 06:15:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [25a05e04e545d4fcac6f1f5ef4f9b0f774b269e137a1d704432557c704114a3d] <==
	I1210 06:15:22.802409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:15:22.809788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:15:22.809842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:15:22.811715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:26.265844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:30.527725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:34.126457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:37.181039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:40.203552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:40.208311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:40.208523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:15:40.208713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-468539_e02368c8-1d6c-466e-8afe-545dd566e201!
	I1210 06:15:40.209173       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bd6fb81-e34e-4509-b3cd-dcebd24f16e8", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-468539_e02368c8-1d6c-466e-8afe-545dd566e201 became leader
	W1210 06:15:40.210883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:40.214648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:40.309932       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-468539_e02368c8-1d6c-466e-8afe-545dd566e201!
	W1210 06:15:42.218607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:42.227484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:44.232092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:44.237466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:46.240710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:46.246000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [befbdb3ebe2058e17934ebd0991371f1d2a7eff5a44d52842577b04f68e5366c] <==
	I1210 06:14:52.019641       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:15:22.023050       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
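The 500 from /healthz near the top of this dump shows two poststarthooks still failing (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes), which is typical while the apiserver is still finishing startup. The same per-check detail can be pulled by hand to confirm whether it clears; this is a minimal sketch, not part of the harness, assuming the kubeconfig context matches the profile name (as minikube sets up) and using the endpoint address shown in the log above:

	# Verbose healthz straight from the API server, with the same per-check output as the log above
	kubectl --context default-k8s-diff-port-125336 get --raw='/healthz?verbose'
	# Or probe the endpoint directly; anonymous access to /healthz is allowed in a default cluster
	curl -ks 'https://192.168.103.2:8444/healthz?verbose'
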
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-468539 -n no-preload-468539
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-468539 -n no-preload-468539: exit status 2 (341.76283ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-468539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.05s)
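For reference, the failing step can be replayed by hand against the same profile; the commands below mirror the invocations recorded above (the pause call follows the same pattern shown for the embed-certs profile further down) and are a sketch, not part of the harness:

	# Pause the profile the way the test does; a non-zero exit reproduces the failure recorded above
	out/minikube-linux-amd64 pause -p no-preload-468539 --alsologtostderr -v=1
	# Collect the same post-mortem state the helpers gather afterwards
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-468539 -n no-preload-468539
	kubectl --context no-preload-468539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running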

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-028500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-028500 --alsologtostderr -v=1: exit status 80 (2.463735146s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-028500 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:15:59.708048  406118 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:59.708501  406118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:59.708514  406118 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:59.708526  406118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:59.708703  406118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:59.708913  406118 out.go:368] Setting JSON to false
	I1210 06:15:59.708931  406118 mustload.go:66] Loading cluster: embed-certs-028500
	I1210 06:15:59.709281  406118 config.go:182] Loaded profile config "embed-certs-028500": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:59.709686  406118 cli_runner.go:164] Run: docker container inspect embed-certs-028500 --format={{.State.Status}}
	I1210 06:15:59.727563  406118 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:59.727812  406118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:59.788287  406118 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-10 06:15:59.776726136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:59.788896  406118 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-028500 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:15:59.790312  406118 out.go:179] * Pausing node embed-certs-028500 ... 
	I1210 06:15:59.791210  406118 host.go:66] Checking if "embed-certs-028500" exists ...
	I1210 06:15:59.791428  406118 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:59.791462  406118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-028500
	I1210 06:15:59.808754  406118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/embed-certs-028500/id_rsa Username:docker}
	I1210 06:15:59.901120  406118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:59.922219  406118 pause.go:52] kubelet running: true
	I1210 06:15:59.922325  406118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:00.072588  406118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:00.072658  406118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:00.134070  406118 cri.go:89] found id: "cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc"
	I1210 06:16:00.134107  406118 cri.go:89] found id: "159ebdaee8f047d1f4901272cd48b5afa5c4eb9b9ab0ff33ac677eda1288666c"
	I1210 06:16:00.134114  406118 cri.go:89] found id: "eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f"
	I1210 06:16:00.134119  406118 cri.go:89] found id: "7689ab2e3dacdba99303712f566c57a921880a70789c8f5a102d20e7f6731ab2"
	I1210 06:16:00.134123  406118 cri.go:89] found id: "b2ae79e89c55ea1c76b0f7bf4d2c9feb4cd3888baf3cc33684b2ee43e27c3cfd"
	I1210 06:16:00.134128  406118 cri.go:89] found id: "8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd"
	I1210 06:16:00.134133  406118 cri.go:89] found id: "9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b"
	I1210 06:16:00.134148  406118 cri.go:89] found id: "f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803"
	I1210 06:16:00.134156  406118 cri.go:89] found id: "6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e"
	I1210 06:16:00.134164  406118 cri.go:89] found id: "a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	I1210 06:16:00.134167  406118 cri.go:89] found id: "2fcc09c4dfe399e5ac6a0dfb0339ee598b36ca0347a95eef915bf614fb98b83d"
	I1210 06:16:00.134171  406118 cri.go:89] found id: ""
	I1210 06:16:00.134225  406118 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:00.145285  406118 retry.go:31] will retry after 366.848394ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:00Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:16:00.512889  406118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:16:00.525100  406118 pause.go:52] kubelet running: false
	I1210 06:16:00.525149  406118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:00.663227  406118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:00.663322  406118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:00.725657  406118 cri.go:89] found id: "cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc"
	I1210 06:16:00.725677  406118 cri.go:89] found id: "159ebdaee8f047d1f4901272cd48b5afa5c4eb9b9ab0ff33ac677eda1288666c"
	I1210 06:16:00.725681  406118 cri.go:89] found id: "eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f"
	I1210 06:16:00.725685  406118 cri.go:89] found id: "7689ab2e3dacdba99303712f566c57a921880a70789c8f5a102d20e7f6731ab2"
	I1210 06:16:00.725688  406118 cri.go:89] found id: "b2ae79e89c55ea1c76b0f7bf4d2c9feb4cd3888baf3cc33684b2ee43e27c3cfd"
	I1210 06:16:00.725692  406118 cri.go:89] found id: "8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd"
	I1210 06:16:00.725696  406118 cri.go:89] found id: "9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b"
	I1210 06:16:00.725698  406118 cri.go:89] found id: "f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803"
	I1210 06:16:00.725701  406118 cri.go:89] found id: "6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e"
	I1210 06:16:00.725708  406118 cri.go:89] found id: "a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	I1210 06:16:00.725711  406118 cri.go:89] found id: "2fcc09c4dfe399e5ac6a0dfb0339ee598b36ca0347a95eef915bf614fb98b83d"
	I1210 06:16:00.725714  406118 cri.go:89] found id: ""
	I1210 06:16:00.725750  406118 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:00.737196  406118 retry.go:31] will retry after 272.384026ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:00Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:16:01.010756  406118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:16:01.023682  406118 pause.go:52] kubelet running: false
	I1210 06:16:01.023731  406118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:01.154749  406118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:01.154835  406118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:01.218723  406118 cri.go:89] found id: "cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc"
	I1210 06:16:01.218747  406118 cri.go:89] found id: "159ebdaee8f047d1f4901272cd48b5afa5c4eb9b9ab0ff33ac677eda1288666c"
	I1210 06:16:01.218751  406118 cri.go:89] found id: "eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f"
	I1210 06:16:01.218755  406118 cri.go:89] found id: "7689ab2e3dacdba99303712f566c57a921880a70789c8f5a102d20e7f6731ab2"
	I1210 06:16:01.218758  406118 cri.go:89] found id: "b2ae79e89c55ea1c76b0f7bf4d2c9feb4cd3888baf3cc33684b2ee43e27c3cfd"
	I1210 06:16:01.218761  406118 cri.go:89] found id: "8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd"
	I1210 06:16:01.218764  406118 cri.go:89] found id: "9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b"
	I1210 06:16:01.218767  406118 cri.go:89] found id: "f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803"
	I1210 06:16:01.218770  406118 cri.go:89] found id: "6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e"
	I1210 06:16:01.218775  406118 cri.go:89] found id: "a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	I1210 06:16:01.218783  406118 cri.go:89] found id: "2fcc09c4dfe399e5ac6a0dfb0339ee598b36ca0347a95eef915bf614fb98b83d"
	I1210 06:16:01.218786  406118 cri.go:89] found id: ""
	I1210 06:16:01.218828  406118 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:01.230266  406118 retry.go:31] will retry after 657.2511ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:01Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:16:01.888119  406118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:16:01.900699  406118 pause.go:52] kubelet running: false
	I1210 06:16:01.900744  406118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:02.030266  406118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:02.030329  406118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:02.093468  406118 cri.go:89] found id: "cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc"
	I1210 06:16:02.093489  406118 cri.go:89] found id: "159ebdaee8f047d1f4901272cd48b5afa5c4eb9b9ab0ff33ac677eda1288666c"
	I1210 06:16:02.093493  406118 cri.go:89] found id: "eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f"
	I1210 06:16:02.093496  406118 cri.go:89] found id: "7689ab2e3dacdba99303712f566c57a921880a70789c8f5a102d20e7f6731ab2"
	I1210 06:16:02.093499  406118 cri.go:89] found id: "b2ae79e89c55ea1c76b0f7bf4d2c9feb4cd3888baf3cc33684b2ee43e27c3cfd"
	I1210 06:16:02.093502  406118 cri.go:89] found id: "8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd"
	I1210 06:16:02.093505  406118 cri.go:89] found id: "9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b"
	I1210 06:16:02.093508  406118 cri.go:89] found id: "f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803"
	I1210 06:16:02.093510  406118 cri.go:89] found id: "6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e"
	I1210 06:16:02.093517  406118 cri.go:89] found id: "a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	I1210 06:16:02.093542  406118 cri.go:89] found id: "2fcc09c4dfe399e5ac6a0dfb0339ee598b36ca0347a95eef915bf614fb98b83d"
	I1210 06:16:02.093548  406118 cri.go:89] found id: ""
	I1210 06:16:02.093585  406118 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:02.106973  406118 out.go:203] 
	W1210 06:16:02.108154  406118 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:16:02.108178  406118 out.go:285] * 
	* 
	W1210 06:16:02.112222  406118 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:16:02.113444  406118 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-028500 --alsologtostderr -v=1 failed: exit status 80
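For context, the pause path shown in the stderr above first enumerates CRI containers with crictl and only then asks runc for its own view of running containers; it is that second step that keeps failing because /run/runc does not exist on this crio node, which is what finally surfaces as GUEST_PAUSE / exit status 80. The sketch below is a reconstruction for illustration only, not minikube's actual pause.go: the command names and flags are taken from the log, the error handling around them is assumed.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listCRIContainers mirrors the "crictl ps" invocation seen in the log:
	// it returns the IDs of running containers in the given namespace.
	func listCRIContainers(namespace string) ([]byte, error) {
		return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+namespace).Output()
	}

	// listRuncContainers mirrors the failing step: "sudo runc list -f json".
	// On this node it fails with "open /run/runc: no such file or directory"
	// because runc's default state directory is absent.
	func listRuncContainers() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		ids, err := listCRIContainers("kube-system")
		fmt.Printf("crictl: err=%v, %d bytes of container IDs\n", err, len(ids))

		if _, err := listRuncContainers(); err != nil {
			// This is the condition that makes the pause above retry three
			// times and then exit with GUEST_PAUSE.
			fmt.Println("runc list failed:", err)
		}
	}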
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-028500
helpers_test.go:244: (dbg) docker inspect embed-certs-028500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef",
	        "Created": "2025-12-10T06:13:43.905625825Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389492,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:14:57.670944175Z",
	            "FinishedAt": "2025-12-10T06:14:56.348394799Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/hosts",
	        "LogPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef-json.log",
	        "Name": "/embed-certs-028500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-028500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-028500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef",
	                "LowerDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-028500",
	                "Source": "/var/lib/docker/volumes/embed-certs-028500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-028500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-028500",
	                "name.minikube.sigs.k8s.io": "embed-certs-028500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c5875f51895da306fa90e3352452cdbb4f10230685bc1daa30e52d4793821bb5",
	            "SandboxKey": "/var/run/docker/netns/c5875f51895d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-028500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8125d4cfb05aa73cd2f2d202e5458638ebd5752e96171ba51a763c87ba4071f",
	                    "EndpointID": "ac3779509b5709b3af05e1f98c60319f41c8d13c24e8f444312c9a28d3795749",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "de:8c:dc:6d:ff:b1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-028500",
	                        "07156149803f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
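The pause command connects over SSH using the host port that the inspect output above maps to the container's 22/tcp (33123). A minimal sketch of the same Go-template query that appears earlier in the log (`docker container inspect -f ...`) is shown here; it only assumes a local docker CLI and the profile name from this test.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort runs the same template-based inspect that cli_runner logs
	// above, returning the host port published for the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("embed-certs-028500")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Against the container above this prints 33123, matching the
		// "new ssh client" line earlier in the log.
		fmt.Println("ssh port:", port)
	}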
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500: exit status 2 (306.600109ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
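The helper treats this non-zero exit as potentially benign: stdout reports the host as Running while the command still exits 2 because other components are not healthy. A hedged sketch of how a wrapper might reproduce that "(may be ok)" check follows; only the command line is taken from the log, the exit-code handling is an assumption.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as helpers_test.go above: print only the host state.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "embed-certs-028500", "-n", "embed-certs-028500")
		out, err := cmd.Output()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 2 with "Running" on stdout is the tolerated case:
			// the host container is up even though the cluster is not healthy.
			fmt.Printf("status exited %d, host=%q\n", exitErr.ExitCode(), string(out))
			return
		}
		if err != nil {
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Printf("host=%q\n", string(out))
	}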
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-028500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-028500 logs -n 25: (1.003020615s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ no-preload-468539 image list --format=json                                                                                                                                                                                                         │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p no-preload-468539 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ embed-certs-028500 image list --format=json                                                                                                                                                                                                        │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p embed-certs-028500 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
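	The scp/apply sequence above follows a simple pattern: each dashboard manifest is first copied into /etc/kubernetes/addons, then a single kubectl apply is issued with one -f flag per file. A minimal local sketch of that batching step in Go (the real flow runs the command over SSH with sudo; the binary and file paths below are taken from the log, everything else is illustrative):

	// Sketch only: batch-apply a list of addon manifests with one kubectl call.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func applyAddons(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m) // one -f per manifest, as in the log line above
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		if err := applyAddons("/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"/var/lib/minikube/kubeconfig", manifests); err != nil {
			fmt.Println(err)
		}
	}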
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
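	The healthz exchanges above show the usual startup sequence: 403 while anonymous requests are still forbidden (the rbac/bootstrap-roles poststarthook has not finished), 500 while individual poststarthooks still report failures, then 200 once the control plane is ready. A minimal sketch of such a poll loop (assumed, not minikube's actual api_server.go; TLS verification is skipped here purely for illustration):

	// Sketch only: poll an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
				// 403 (anonymous not yet authorized) and 500 (poststarthooks still
				// failing) both mean "not ready yet", so keep polling.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}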
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
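	The NodePressure step above reads node capacity (ephemeral storage, CPU) and confirms that no pressure conditions are set on the node. A rough client-go sketch of an equivalent check (assumed, not minikube's node_conditions.go; the kubeconfig path is illustrative):

	// Sketch only: list nodes, print capacity, and flag any pressure conditions.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				// MemoryPressure / DiskPressure / PIDPressure should be False on a healthy node.
				if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
				}
			}
		}
	}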
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
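	Each "exists ... succeeded" pair above is a cache hit: the image tarball is only saved if it is not already present under .minikube/cache/images, which is why every lookup completes in microseconds. A minimal sketch of that check (paths mirror the log; the pull-and-save branch is deliberately omitted):

	// Sketch only: skip saving an image tarball when it already exists in the cache.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func cachePath(cacheDir, image string) string {
		// e.g. registry.k8s.io/pause:3.10.1 -> <cacheDir>/registry.k8s.io/pause_3.10.1
		return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	}

	func ensureCached(cacheDir, image string) error {
		p := cachePath(cacheDir, image)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("cache hit: %s\n", p) // the "exists ... succeeded" case in the log
			return nil
		}
		// A real implementation would pull the image and save it to p here.
		return fmt.Errorf("cache miss for %s (saving not implemented in this sketch)", image)
	}

	func main() {
		for _, img := range []string{"registry.k8s.io/pause:3.10.1", "gcr.io/k8s-minikube/storage-provisioner:v5"} {
			if err := ensureCached("/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64", img); err != nil {
				fmt.Println(err)
			}
		}
	}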
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.174412  398989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:39.179699  398989 fix.go:56] duration metric: took 4.748715142s for fixHost
	I1210 06:15:39.179726  398989 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 4.748759367s
	I1210 06:15:39.179795  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:39.198657  398989 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:39.198712  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.198747  398989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:39.198819  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.220204  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.220241  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.317475  398989 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:39.391108  398989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:39.430876  398989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:39.435737  398989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:39.435812  398989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:39.444134  398989 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:39.444154  398989 start.go:496] detecting cgroup driver to use...
	I1210 06:15:39.444185  398989 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:39.444220  398989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:39.458418  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:39.470158  398989 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:39.470210  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:39.485432  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:39.497705  398989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:39.587848  398989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:39.679325  398989 docker.go:234] disabling docker service ...
	I1210 06:15:39.679390  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:39.695744  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:39.710121  398989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:39.803290  398989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:39.889666  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:39.901841  398989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:39.916001  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.053859  398989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:40.053907  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.064032  398989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:40.064119  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.074052  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.082799  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.091069  398989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:40.099125  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.108348  398989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.116442  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.124562  398989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:40.131659  398989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:40.139831  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:40.235238  398989 ssh_runner.go:195] Run: sudo systemctl restart crio
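	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pointed at registry.k8s.io/pause:3.10.1 and cgroup_manager is forced to "systemd" before crio is restarted. An equivalent sketch of those two rewrites in Go (illustrative only; the real flow runs sed over SSH):

	// Sketch only: rewrite the pause_image and cgroup_manager lines of a crio drop-in.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10.1", "systemd"); err != nil {
			fmt.Println(err)
		}
	}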
	I1210 06:15:40.390045  398989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:40.390127  398989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:40.394019  398989 start.go:564] Will wait 60s for crictl version
	I1210 06:15:40.394073  398989 ssh_runner.go:195] Run: which crictl
	I1210 06:15:40.397521  398989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:40.422130  398989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:40.422196  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.449888  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.482873  398989 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:15:40.484109  398989 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:40.504017  398989 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:40.508495  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:40.519961  398989 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:40.520150  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.655009  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.788669  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.920137  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:40.920210  398989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:40.955931  398989 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:40.955957  398989 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:40.955966  398989 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1210 06:15:40.956107  398989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-125336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:40.956192  398989 ssh_runner.go:195] Run: crio config
	I1210 06:15:41.004526  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:41.004548  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:41.004564  398989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:41.004584  398989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125336 NodeName:default-k8s-diff-port-125336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:41.004697  398989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:41.004752  398989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:41.013662  398989 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:41.013711  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:41.021680  398989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 06:15:41.034639  398989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:41.047897  398989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1210 06:15:41.060681  398989 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:41.064298  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:41.074539  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:41.167815  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:41.192312  398989 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336 for IP: 192.168.103.2
	I1210 06:15:41.192334  398989 certs.go:195] generating shared ca certs ...
	I1210 06:15:41.192367  398989 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:41.192505  398989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:41.192546  398989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:41.192557  398989 certs.go:257] generating profile certs ...
	I1210 06:15:41.192643  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key
	I1210 06:15:41.192694  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134
	I1210 06:15:41.192729  398989 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key
	I1210 06:15:41.192855  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:41.192897  398989 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:41.192910  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:41.192952  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:41.192986  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:41.193016  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:41.193074  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:41.193841  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:41.212216  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:41.230779  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:41.249215  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:41.273141  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:15:41.291653  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:41.308892  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:41.328983  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:41.348815  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:41.369178  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:41.390044  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:41.407887  398989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:41.422822  398989 ssh_runner.go:195] Run: openssl version
	I1210 06:15:41.430217  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.438931  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:41.447682  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451942  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451995  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.496117  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:41.504580  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.512960  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:41.521564  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525244  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525308  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.564172  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:41.572852  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.580900  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:41.588301  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592675  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592721  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.637108  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
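The openssl/ln sequence above follows the standard OpenSSL CA directory layout: each certificate installed under /usr/share/ca-certificates is hashed, and a <hash>.0 symlink is expected in /etc/ssl/certs (b5213941, 51391683 and 3ec20f2e in this run). A rough sketch of creating such a link by hand, assuming the same paths as in the log:

  # Illustrative only: compute the subject hash and point the hash-named symlink at the CA file.
  HASH="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"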
	I1210 06:15:41.645490  398989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:41.649879  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:41.690638  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:41.747836  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:41.800228  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:41.862694  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:41.914250  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:15:41.958747  398989 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:41.959041  398989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:41.959166  398989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:41.998590  398989 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:15:41.998610  398989 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:15:41.998616  398989 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:15:41.998621  398989 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:15:41.998625  398989 cri.go:89] found id: ""
	I1210 06:15:41.998665  398989 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:42.012230  398989 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:42.012308  398989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:42.023047  398989 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:42.023062  398989 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:42.023133  398989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:42.032028  398989 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:42.033327  398989 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-125336" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.034299  398989 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-125336" cluster setting kubeconfig missing "default-k8s-diff-port-125336" context setting]
	I1210 06:15:42.035703  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.037888  398989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:42.047350  398989 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:15:42.047375  398989 kubeadm.go:602] duration metric: took 24.306597ms to restartPrimaryControlPlane
	I1210 06:15:42.047383  398989 kubeadm.go:403] duration metric: took 88.644178ms to StartCluster
	I1210 06:15:42.047399  398989 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.047471  398989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.049858  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.050141  398989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:42.050363  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:42.050409  398989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:42.050484  398989 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.050502  398989 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.050511  398989 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:42.050535  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051015  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051175  398989 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051191  398989 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.051199  398989 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:42.051223  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051559  398989 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051618  398989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:15:42.051583  398989 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:42.051661  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051950  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.053296  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:42.082195  398989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:42.082199  398989 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:15:42.083378  398989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.083403  398989 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1210 06:15:37.646711  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:39.648042  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:41.648596  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:42.083414  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:42.083554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.086520  398989 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.086542  398989 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:42.086569  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.086807  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:42.086824  398989 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:42.086879  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.088501  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.127971  398989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.127995  398989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:42.128058  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.131157  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.131148  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.163643  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.238425  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:42.261214  398989 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:42.266856  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:42.266878  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:42.273292  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.296500  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:42.296642  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:42.316168  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.322727  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:42.322747  398989 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:42.342110  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:42.342132  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:42.364017  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:42.364037  398989 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:42.383601  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:42.383628  398989 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:42.400222  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:42.400267  398989 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:42.413822  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:42.413841  398989 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:42.428985  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:42.429002  398989 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:42.445006  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:43.730716  398989 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:43.730761  398989 node_ready.go:38] duration metric: took 1.469517861s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:43.730780  398989 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:43.730833  398989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:44.295467  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.022145188s)
	I1210 06:15:44.295527  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.979317742s)
	I1210 06:15:44.295605  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.850565953s)
	I1210 06:15:44.295731  398989 api_server.go:72] duration metric: took 2.245559846s to wait for apiserver process to appear ...
	I1210 06:15:44.295748  398989 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:44.295770  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.297453  398989 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-125336 addons enable metrics-server
	
	I1210 06:15:44.301230  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.301258  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
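The 500 responses above come from the apiserver's /healthz endpoint while the rbac and priority-class poststart hooks are still finishing, and the body lists each check as [+] ok or [-] failed. A minimal way to probe the same endpoint by hand, assuming the API port from this log (8444) and skipping TLS verification:

  # Illustrative only: /healthz?verbose returns the per-check breakdown even once everything is healthy.
  curl -ks https://192.168.103.2:8444/healthz?verbose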
	I1210 06:15:44.307227  398989 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 06:15:44.147036  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:46.146566  389191 pod_ready.go:94] pod "coredns-66bc5c9577-8xwfc" is "Ready"
	I1210 06:15:46.146592  389191 pod_ready.go:86] duration metric: took 37.005340048s for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.149120  389191 pod_ready.go:83] waiting for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.152937  389191 pod_ready.go:94] pod "etcd-embed-certs-028500" is "Ready"
	I1210 06:15:46.152956  389191 pod_ready.go:86] duration metric: took 3.81638ms for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.154886  389191 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.158540  389191 pod_ready.go:94] pod "kube-apiserver-embed-certs-028500" is "Ready"
	I1210 06:15:46.158566  389191 pod_ready.go:86] duration metric: took 3.65933ms for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.160461  389191 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.345207  389191 pod_ready.go:94] pod "kube-controller-manager-embed-certs-028500" is "Ready"
	I1210 06:15:46.345232  389191 pod_ready.go:86] duration metric: took 184.75138ms for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.545176  389191 pod_ready.go:83] waiting for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.945367  389191 pod_ready.go:94] pod "kube-proxy-sr7kh" is "Ready"
	I1210 06:15:46.945391  389191 pod_ready.go:86] duration metric: took 400.193359ms for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.145257  389191 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544937  389191 pod_ready.go:94] pod "kube-scheduler-embed-certs-028500" is "Ready"
	I1210 06:15:47.544958  389191 pod_ready.go:86] duration metric: took 399.673562ms for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544969  389191 pod_ready.go:40] duration metric: took 38.406618977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:47.594190  389191 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:15:47.595325  389191 out.go:179] * Done! kubectl is now configured to use "embed-certs-028500" cluster and "default" namespace by default
	I1210 06:15:44.308766  398989 addons.go:530] duration metric: took 2.258355424s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:44.795874  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.800857  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.800883  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:45.296231  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:45.301136  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1210 06:15:45.302322  398989 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:45.302347  398989 api_server.go:131] duration metric: took 1.006591687s to wait for apiserver health ...
	I1210 06:15:45.302357  398989 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:45.306315  398989 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:45.306352  398989 system_pods.go:61] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.306367  398989 system_pods.go:61] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.306382  398989 system_pods.go:61] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.306398  398989 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.306414  398989 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.306429  398989 system_pods.go:61] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.306439  398989 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.306446  398989 system_pods.go:61] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.306457  398989 system_pods.go:74] duration metric: took 4.090626ms to wait for pod list to return data ...
	I1210 06:15:45.306469  398989 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:45.309065  398989 default_sa.go:45] found service account: "default"
	I1210 06:15:45.309111  398989 default_sa.go:55] duration metric: took 2.635327ms for default service account to be created ...
	I1210 06:15:45.309121  398989 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:45.312161  398989 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:45.312188  398989 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.312199  398989 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.312211  398989 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.312295  398989 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.312334  398989 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.312348  398989 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.312364  398989 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.312380  398989 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.312393  398989 system_pods.go:126] duration metric: took 3.26398ms to wait for k8s-apps to be running ...
	I1210 06:15:45.312421  398989 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:45.312464  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:45.330746  398989 system_svc.go:56] duration metric: took 18.317711ms WaitForService to wait for kubelet
	I1210 06:15:45.330808  398989 kubeadm.go:587] duration metric: took 3.280637081s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:45.330849  398989 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:45.333665  398989 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:45.333690  398989 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:45.333707  398989 node_conditions.go:105] duration metric: took 2.852028ms to run NodePressure ...
	I1210 06:15:45.333720  398989 start.go:242] waiting for startup goroutines ...
	I1210 06:15:45.333730  398989 start.go:247] waiting for cluster config update ...
	I1210 06:15:45.333744  398989 start.go:256] writing updated cluster config ...
	I1210 06:15:45.334096  398989 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:45.338120  398989 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:45.341568  398989 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:15:47.347196  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:49.347509  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:51.348265  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:53.847175  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:56.347151  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:58.846930  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
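The pod_ready loop above polls each control-plane pod by label until it is Ready or gone. The same state can be checked manually with kubectl, assuming the kubeconfig context matches the profile name from the log (shown here for the kube-dns label as an example):

  # Illustrative only.
  kubectl --context default-k8s-diff-port-125336 -n kube-system get pods -l k8s-app=kube-dns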
	
	
	==> CRI-O <==
	Dec 10 06:15:29 embed-certs-028500 crio[562]: time="2025-12-10T06:15:29.258923485Z" level=info msg="Started container" PID=1738 containerID=94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper id=d8f315ee-6afe-438f-9438-5ddf38e659fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d8aaedfc177ff041f707cf9b683d3234b2ef963e9b04b428bad88ad7f5cb2b6
	Dec 10 06:15:29 embed-certs-028500 crio[562]: time="2025-12-10T06:15:29.314868778Z" level=info msg="Removing container: d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35" id=17a2f301-463d-4953-be79-dd6269d7d3e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:29 embed-certs-028500 crio[562]: time="2025-12-10T06:15:29.323784022Z" level=info msg="Removed container d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=17a2f301-463d-4953-be79-dd6269d7d3e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.343972822Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6caf4847-f48d-43d8-88df-fdbcb2f3b20b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.344937361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=efb8d708-9c98-4d15-bb46-b5efa0f8da1e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.346061869Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c79d6d2b-e4ea-4e17-919f-ca25499a4101 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.346229232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.350852273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.351038438Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d4033eec9da64f5495301275318e7212e188cb60b543413e5107552100c1a7fc/merged/etc/passwd: no such file or directory"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.351075013Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d4033eec9da64f5495301275318e7212e188cb60b543413e5107552100c1a7fc/merged/etc/group: no such file or directory"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.351381854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.381287032Z" level=info msg="Created container cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc: kube-system/storage-provisioner/storage-provisioner" id=c79d6d2b-e4ea-4e17-919f-ca25499a4101 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.382385039Z" level=info msg="Starting container: cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc" id=378e8f98-5716-4758-9c7f-ce9ed01ff68b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.384322111Z" level=info msg="Started container" PID=1753 containerID=cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc description=kube-system/storage-provisioner/storage-provisioner id=378e8f98-5716-4758-9c7f-ce9ed01ff68b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3aa85a58f9eefeb67979e39d6c968cb5bcb0cd2a589fcfd1cb4839c1b3ad10a
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.214876328Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1f20f08b-a1eb-4691-943b-b6a5e877e170 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.215889396Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c0fb6676-18c1-437d-b72c-71ba5db1355a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.217343625Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=7ea59f70-6474-4992-9446-c80d39007a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.217471479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.224161953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.22468427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.265339183Z" level=info msg="Created container a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=7ea59f70-6474-4992-9446-c80d39007a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.26592932Z" level=info msg="Starting container: a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53" id=1bc0cc74-285c-4631-a105-6a33379a3341 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.267949888Z" level=info msg="Started container" PID=1788 containerID=a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper id=1bc0cc74-285c-4631-a105-6a33379a3341 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d8aaedfc177ff041f707cf9b683d3234b2ef963e9b04b428bad88ad7f5cb2b6
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.39013872Z" level=info msg="Removing container: 94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070" id=f1860647-1a44-40dd-bd81-dc19d0c756f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.399430812Z" level=info msg="Removed container 94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=f1860647-1a44-40dd-bd81-dc19d0c756f4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a0f5ccad99d1d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   6d8aaedfc177f       dashboard-metrics-scraper-6ffb444bf9-jnrs7   kubernetes-dashboard
	cb042d2d3ee4e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   f3aa85a58f9ee       storage-provisioner                          kube-system
	2fcc09c4dfe39       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   6751fc436b742       kubernetes-dashboard-855c9754f9-vrlx4        kubernetes-dashboard
	26ad907be1f81       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   d6248c202cb59       busybox                                      default
	159ebdaee8f04       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   206396284eafe       coredns-66bc5c9577-8xwfc                     kube-system
	eb4162f085aa8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   f3aa85a58f9ee       storage-provisioner                          kube-system
	7689ab2e3dacd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   fdbf9a1b37fcf       kindnet-6gq2z                                kube-system
	b2ae79e89c55e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           54 seconds ago      Running             kube-proxy                  0                   cd9092b9fa4ea       kube-proxy-sr7kh                             kube-system
	8cb47732447e7       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           57 seconds ago      Running             kube-controller-manager     0                   f6966c4f6afc6       kube-controller-manager-embed-certs-028500   kube-system
	9448aac68883a       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           57 seconds ago      Running             kube-apiserver              0                   1b2f6d61a1335       kube-apiserver-embed-certs-028500            kube-system
	f02f944bc389e       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           57 seconds ago      Running             kube-scheduler              0                   01d85574f631b       kube-scheduler-embed-certs-028500            kube-system
	6ef9ca2b457b0       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   9d86535702480       etcd-embed-certs-028500                      kube-system
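The container status table above is the node-side crictl view of the same pods. A sketch of regenerating it over SSH, assuming the profile name used in this log:

  # Illustrative only: list running and exited containers via the CRI socket on the node.
  out/minikube-linux-amd64 -p embed-certs-028500 ssh -- sudo crictl ps -a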
	
	
	==> coredns [159ebdaee8f047d1f4901272cd48b5afa5c4eb9b9ab0ff33ac677eda1288666c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42041 - 238 "HINFO IN 1820425727757802405.8973029253656249968. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.88352195s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
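The coredns errors above show the pod timing out against the in-cluster API service at 10.96.0.1:443 before node networking settled. As a hand check of that path (a sketch only; the service IP is taken from the log and curl availability inside the node is assumed):

  # Illustrative only: an unauthenticated /version request should get an HTTP response once the service route works.
  out/minikube-linux-amd64 -p embed-certs-028500 ssh -- curl -ks https://10.96.0.1:443/version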
	
	
	==> describe nodes <==
	Name:               embed-certs-028500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-028500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=embed-certs-028500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_14_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:14:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-028500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-028500
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                cff73820-6963-4ea9-ae17-4b15b6269bbe
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-8xwfc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-028500                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-6gq2z                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-028500             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-028500    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-sr7kh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-028500             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jnrs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vrlx4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-028500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-028500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-028500 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-028500 event: Registered Node embed-certs-028500 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-028500 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-028500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-028500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-028500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-028500 event: Registered Node embed-certs-028500 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e] <==
	{"level":"warn","ts":"2025-12-10T06:15:06.919243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.933321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.939932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.946854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.954230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.960565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.967624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.974318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.980872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.988365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.995796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.001966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.009138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.018819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.025277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.033061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.039660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.046877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.053539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.060260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.066798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.083043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.089796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.096727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.163377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:16:03 up 58 min,  0 user,  load average: 4.06, 4.40, 2.95
	Linux embed-certs-028500 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7689ab2e3dacdba99303712f566c57a921880a70789c8f5a102d20e7f6731ab2] <==
	I1210 06:15:08.755335       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:15:08.847383       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:15:08.847552       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:15:08.847568       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:15:08.847592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:15:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:15:09.050094       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:15:09.050155       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:15:09.050180       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:15:09.050387       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:15:09.450294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:15:09.450325       1 metrics.go:72] Registering metrics
	I1210 06:15:09.450412       1 controller.go:711] "Syncing nftables rules"
	I1210 06:15:19.051006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:19.051065       1 main.go:301] handling current node
	I1210 06:15:29.053843       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:29.053885       1 main.go:301] handling current node
	I1210 06:15:39.050175       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:39.050221       1 main.go:301] handling current node
	I1210 06:15:49.050254       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:49.050298       1 main.go:301] handling current node
	I1210 06:15:59.050949       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:59.050989       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b] <==
	I1210 06:15:07.642886       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:15:07.642837       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:15:07.642969       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:15:07.642977       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:15:07.642983       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:15:07.642989       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:15:07.643187       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:15:07.643266       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:15:07.643628       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:15:07.648934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:15:07.651107       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:15:07.680195       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:15:07.690848       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:15:07.899791       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:15:07.926945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:15:07.944710       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:15:07.951609       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:15:07.958003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:15:07.992513       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.202.85"}
	I1210 06:15:08.001042       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.248.201"}
	I1210 06:15:08.545989       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:15:11.333490       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:15:11.435009       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:15:11.584154       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:15:11.584156       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd] <==
	I1210 06:15:10.942306       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 06:15:10.942361       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 06:15:10.942373       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 06:15:10.942380       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 06:15:10.944837       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:15:10.946031       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:15:10.948301       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:15:10.980745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:15:10.980772       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:15:10.980819       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:15:10.980858       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:15:10.980888       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:15:10.980802       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:15:10.980904       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 06:15:10.981455       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:15:10.982122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:15:10.982157       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:15:10.982165       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:15:10.983339       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:15:10.984511       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:15:10.995612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:15:10.997787       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:15:10.998982       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:15:11.001196       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:15:11.008758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b2ae79e89c55ea1c76b0f7bf4d2c9feb4cd3888baf3cc33684b2ee43e27c3cfd] <==
	I1210 06:15:08.619158       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:15:08.708398       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:15:08.809196       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:15:08.809227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 06:15:08.809301       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:15:08.826395       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:15:08.826439       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:15:08.832164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:15:08.832529       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:15:08.832559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:08.834826       1 config.go:200] "Starting service config controller"
	I1210 06:15:08.834855       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:15:08.834913       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:15:08.834929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:15:08.834953       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:15:08.834958       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:15:08.835130       1 config.go:309] "Starting node config controller"
	I1210 06:15:08.835149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:15:08.835156       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:15:08.935073       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:15:08.935131       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:15:08.935231       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803] <==
	I1210 06:15:06.464907       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:15:07.590270       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:15:07.590414       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:15:07.590429       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:15:07.590439       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:15:07.608830       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1210 06:15:07.608909       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:07.611956       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:07.612018       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:07.612477       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:15:07.612628       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:15:07.712728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:15:15 embed-certs-028500 kubelet[712]: E1210 06:15:15.275831     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:15 embed-certs-028500 kubelet[712]: I1210 06:15:15.805730     712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:15:16 embed-certs-028500 kubelet[712]: I1210 06:15:16.280143     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:16 embed-certs-028500 kubelet[712]: E1210 06:15:16.280361     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:17 embed-certs-028500 kubelet[712]: I1210 06:15:17.931032     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:17 embed-certs-028500 kubelet[712]: E1210 06:15:17.931297     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:18 embed-certs-028500 kubelet[712]: I1210 06:15:18.525904     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vrlx4" podStartSLOduration=1.796583206 podStartE2EDuration="7.525880861s" podCreationTimestamp="2025-12-10 06:15:11 +0000 UTC" firstStartedPulling="2025-12-10 06:15:11.844987119 +0000 UTC m=+6.721647046" lastFinishedPulling="2025-12-10 06:15:17.574284767 +0000 UTC m=+12.450944701" observedRunningTime="2025-12-10 06:15:18.295876585 +0000 UTC m=+13.172536533" watchObservedRunningTime="2025-12-10 06:15:18.525880861 +0000 UTC m=+13.402540810"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: I1210 06:15:29.214524     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: I1210 06:15:29.313585     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: I1210 06:15:29.313821     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: E1210 06:15:29.314075     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:37 embed-certs-028500 kubelet[712]: I1210 06:15:37.931355     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:37 embed-certs-028500 kubelet[712]: E1210 06:15:37.931568     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:39 embed-certs-028500 kubelet[712]: I1210 06:15:39.343549     712 scope.go:117] "RemoveContainer" containerID="eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: I1210 06:15:53.214463     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: I1210 06:15:53.388843     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: I1210 06:15:53.389056     712 scope.go:117] "RemoveContainer" containerID="a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: E1210 06:15:53.389269     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:57 embed-certs-028500 kubelet[712]: I1210 06:15:57.931235     712 scope.go:117] "RemoveContainer" containerID="a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	Dec 10 06:15:57 embed-certs-028500 kubelet[712]: E1210 06:15:57.931479     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:16:00 embed-certs-028500 kubelet[712]: I1210 06:16:00.048698     712 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [2fcc09c4dfe399e5ac6a0dfb0339ee598b36ca0347a95eef915bf614fb98b83d] <==
	2025/12/10 06:15:17 Using namespace: kubernetes-dashboard
	2025/12/10 06:15:17 Using in-cluster config to connect to apiserver
	2025/12/10 06:15:17 Using secret token for csrf signing
	2025/12/10 06:15:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:15:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:15:17 Successful initial request to the apiserver, version: v1.34.3
	2025/12/10 06:15:17 Generating JWE encryption key
	2025/12/10 06:15:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:15:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:15:17 Initializing JWE encryption key from synchronized object
	2025/12/10 06:15:17 Creating in-cluster Sidecar client
	2025/12/10 06:15:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:15:17 Serving insecurely on HTTP port: 9090
	2025/12/10 06:15:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:15:17 Starting overwatch
	
	
	==> storage-provisioner [cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc] <==
	I1210 06:15:39.398134       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:15:39.407414       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:15:39.407464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:15:39.409978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:42.865123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:47.125581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:50.723888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:53.777893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:56.800177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:56.804374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:56.804512       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:15:56.804650       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-028500_72ab12d8-28e6-48ed-a0af-4022d4c83e4a!
	I1210 06:15:56.804651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f1f4ad0-e1fa-4611-8756-9fd0b611cf54", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-028500_72ab12d8-28e6-48ed-a0af-4022d4c83e4a became leader
	W1210 06:15:56.806570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:56.810267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:56.904902       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-028500_72ab12d8-28e6-48ed-a0af-4022d4c83e4a!
	W1210 06:15:58.813454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:58.817104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:00.819870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:00.823596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:02.827482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:02.832343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f] <==
	I1210 06:15:08.589212       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:15:38.591523       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-028500 -n embed-certs-028500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-028500 -n embed-certs-028500: exit status 2 (326.672766ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-028500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-028500
helpers_test.go:244: (dbg) docker inspect embed-certs-028500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef",
	        "Created": "2025-12-10T06:13:43.905625825Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389492,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:14:57.670944175Z",
	            "FinishedAt": "2025-12-10T06:14:56.348394799Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/hosts",
	        "LogPath": "/var/lib/docker/containers/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef/07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef-json.log",
	        "Name": "/embed-certs-028500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-028500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-028500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "07156149803fd67a2c09058253090db2d9ca551a1a8d785f8bb58a1a70a730ef",
	                "LowerDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a3e4550b9f669f53b5c53505cbd7f6642f82125ec165205e90e6aa1a35c4b9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-028500",
	                "Source": "/var/lib/docker/volumes/embed-certs-028500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-028500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-028500",
	                "name.minikube.sigs.k8s.io": "embed-certs-028500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c5875f51895da306fa90e3352452cdbb4f10230685bc1daa30e52d4793821bb5",
	            "SandboxKey": "/var/run/docker/netns/c5875f51895d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-028500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8125d4cfb05aa73cd2f2d202e5458638ebd5752e96171ba51a763c87ba4071f",
	                    "EndpointID": "ac3779509b5709b3af05e1f98c60319f41c8d13c24e8f444312c9a28d3795749",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "de:8c:dc:6d:ff:b1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-028500",
	                        "07156149803f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500: exit status 2 (309.049545ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-028500 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-725426 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ pause   │ -p old-k8s-version-725426 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │                     │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ delete  │ -p old-k8s-version-725426                                                                                                                                                                                                                          │ old-k8s-version-725426       │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ no-preload-468539 image list --format=json                                                                                                                                                                                                         │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p no-preload-468539 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ embed-certs-028500 image list --format=json                                                                                                                                                                                                        │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p embed-certs-028500 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
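
The most recent pause invocations in the table have no completion time recorded. They can be replayed by hand against a surviving profile to capture the same --alsologtostderr output; a minimal sketch, reusing the binary path and the embed-certs profile name exactly as they appear in the table above:

    out/minikube-linux-amd64 status -p embed-certs-028500
    out/minikube-linux-amd64 pause -p embed-certs-028500 --alsologtostderr -v=1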
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
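The two 404 warnings above show that neither preload mirror publishes the v1.34.3 cri-o tarball, so this start falls back to per-image caching and to fetching kubeadm directly from dl.k8s.io. The missing artifact can be confirmed from the agent; a minimal sketch (assuming curl is available there), using the URL copied verbatim from the first warning:

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 | head -n1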
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
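The healthz polling above follows the usual start-up pattern for this wait loop: anonymous requests are rejected with 403 while the bootstrap RBAC rules that expose /healthz are still being created, the endpoint then reports 500 while individual poststarthooks (the [-] entries) are pending, and the loop stops once a plain 200/ok comes back. The same endpoint can be queried directly once the cluster is up; a minimal sketch, assuming the node IP used in this run and that anonymous access to /healthz is permitted after bootstrap:

    curl -k https://192.168.76.2:8443/healthz
    curl -k "https://192.168.76.2:8443/healthz?verbose"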
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
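All of the required images were already present as tar files under the shared cache, which is why each "save to tar file" step above completed in well under a millisecond. The registry.k8s.io entries can be inspected directly on the agent; a minimal sketch, using the cache path shown in the log:

    ls -lh /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/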
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.174412  398989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:39.179699  398989 fix.go:56] duration metric: took 4.748715142s for fixHost
	I1210 06:15:39.179726  398989 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 4.748759367s
	I1210 06:15:39.179795  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:39.198657  398989 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:39.198712  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.198747  398989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:39.198819  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.220204  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.220241  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.317475  398989 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:39.391108  398989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:39.430876  398989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:39.435737  398989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:39.435812  398989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:39.444134  398989 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:39.444154  398989 start.go:496] detecting cgroup driver to use...
	I1210 06:15:39.444185  398989 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:39.444220  398989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:39.458418  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:39.470158  398989 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:39.470210  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:39.485432  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:39.497705  398989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:39.587848  398989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:39.679325  398989 docker.go:234] disabling docker service ...
	I1210 06:15:39.679390  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:39.695744  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:39.710121  398989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:39.803290  398989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:39.889666  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:39.901841  398989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:39.916001  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.053859  398989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:40.053907  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.064032  398989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:40.064119  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.074052  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.082799  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.091069  398989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:40.099125  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.108348  398989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.116442  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.124562  398989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:40.131659  398989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:40.139831  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:40.235238  398989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:40.390045  398989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:40.390127  398989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:40.394019  398989 start.go:564] Will wait 60s for crictl version
	I1210 06:15:40.394073  398989 ssh_runner.go:195] Run: which crictl
	I1210 06:15:40.397521  398989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:40.422130  398989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:40.422196  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.449888  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.482873  398989 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:15:40.484109  398989 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:40.504017  398989 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:40.508495  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:40.519961  398989 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:40.520150  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.655009  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.788669  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.920137  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:40.920210  398989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:40.955931  398989 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:40.955957  398989 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:40.955966  398989 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1210 06:15:40.956107  398989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-125336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:40.956192  398989 ssh_runner.go:195] Run: crio config
	I1210 06:15:41.004526  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:41.004548  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:41.004564  398989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:41.004584  398989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125336 NodeName:default-k8s-diff-port-125336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:41.004697  398989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:41.004752  398989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:41.013662  398989 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:41.013711  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:41.021680  398989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 06:15:41.034639  398989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:41.047897  398989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1210 06:15:41.060681  398989 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:41.064298  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:41.074539  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:41.167815  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:41.192312  398989 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336 for IP: 192.168.103.2
	I1210 06:15:41.192334  398989 certs.go:195] generating shared ca certs ...
	I1210 06:15:41.192367  398989 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:41.192505  398989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:41.192546  398989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:41.192557  398989 certs.go:257] generating profile certs ...
	I1210 06:15:41.192643  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key
	I1210 06:15:41.192694  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134
	I1210 06:15:41.192729  398989 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key
	I1210 06:15:41.192855  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:41.192897  398989 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:41.192910  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:41.192952  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:41.192986  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:41.193016  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:41.193074  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:41.193841  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:41.212216  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:41.230779  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:41.249215  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:41.273141  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:15:41.291653  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:41.308892  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:41.328983  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:41.348815  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:41.369178  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:41.390044  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:41.407887  398989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:41.422822  398989 ssh_runner.go:195] Run: openssl version
	I1210 06:15:41.430217  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.438931  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:41.447682  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451942  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451995  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.496117  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:41.504580  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.512960  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:41.521564  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525244  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525308  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.564172  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:41.572852  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.580900  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:41.588301  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592675  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592721  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.637108  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:15:41.645490  398989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:41.649879  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:41.690638  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:41.747836  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:41.800228  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:41.862694  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:41.914250  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:15:41.958747  398989 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:41.959041  398989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:41.959166  398989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:41.998590  398989 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:15:41.998610  398989 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:15:41.998616  398989 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:15:41.998621  398989 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:15:41.998625  398989 cri.go:89] found id: ""
	I1210 06:15:41.998665  398989 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:42.012230  398989 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:42.012308  398989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:42.023047  398989 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:42.023062  398989 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:42.023133  398989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:42.032028  398989 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:42.033327  398989 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-125336" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.034299  398989 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-125336" cluster setting kubeconfig missing "default-k8s-diff-port-125336" context setting]
	I1210 06:15:42.035703  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.037888  398989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:42.047350  398989 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:15:42.047375  398989 kubeadm.go:602] duration metric: took 24.306597ms to restartPrimaryControlPlane
	I1210 06:15:42.047383  398989 kubeadm.go:403] duration metric: took 88.644178ms to StartCluster
	I1210 06:15:42.047399  398989 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.047471  398989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.049858  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.050141  398989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:42.050363  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:42.050409  398989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:42.050484  398989 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.050502  398989 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.050511  398989 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:42.050535  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051015  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051175  398989 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051191  398989 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.051199  398989 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:42.051223  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051559  398989 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051618  398989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:15:42.051583  398989 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:42.051661  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051950  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.053296  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:42.082195  398989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:42.082199  398989 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:15:42.083378  398989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.083403  398989 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1210 06:15:37.646711  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:39.648042  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:41.648596  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:42.083414  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:42.083554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.086520  398989 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.086542  398989 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:42.086569  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.086807  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:42.086824  398989 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:42.086879  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.088501  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.127971  398989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.127995  398989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:42.128058  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.131157  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.131148  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.163643  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.238425  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:42.261214  398989 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:42.266856  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:42.266878  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:42.273292  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.296500  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:42.296642  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:42.316168  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.322727  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:42.322747  398989 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:42.342110  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:42.342132  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:42.364017  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:42.364037  398989 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:42.383601  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:42.383628  398989 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:42.400222  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:42.400267  398989 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:42.413822  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:42.413841  398989 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:42.428985  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:42.429002  398989 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:42.445006  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:43.730716  398989 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:43.730761  398989 node_ready.go:38] duration metric: took 1.469517861s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:43.730780  398989 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:43.730833  398989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:44.295467  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.022145188s)
	I1210 06:15:44.295527  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.979317742s)
	I1210 06:15:44.295605  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.850565953s)
	I1210 06:15:44.295731  398989 api_server.go:72] duration metric: took 2.245559846s to wait for apiserver process to appear ...
	I1210 06:15:44.295748  398989 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:44.295770  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.297453  398989 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-125336 addons enable metrics-server
	
	I1210 06:15:44.301230  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.301258  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:44.307227  398989 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 06:15:44.147036  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:46.146566  389191 pod_ready.go:94] pod "coredns-66bc5c9577-8xwfc" is "Ready"
	I1210 06:15:46.146592  389191 pod_ready.go:86] duration metric: took 37.005340048s for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.149120  389191 pod_ready.go:83] waiting for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.152937  389191 pod_ready.go:94] pod "etcd-embed-certs-028500" is "Ready"
	I1210 06:15:46.152956  389191 pod_ready.go:86] duration metric: took 3.81638ms for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.154886  389191 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.158540  389191 pod_ready.go:94] pod "kube-apiserver-embed-certs-028500" is "Ready"
	I1210 06:15:46.158566  389191 pod_ready.go:86] duration metric: took 3.65933ms for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.160461  389191 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.345207  389191 pod_ready.go:94] pod "kube-controller-manager-embed-certs-028500" is "Ready"
	I1210 06:15:46.345232  389191 pod_ready.go:86] duration metric: took 184.75138ms for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.545176  389191 pod_ready.go:83] waiting for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.945367  389191 pod_ready.go:94] pod "kube-proxy-sr7kh" is "Ready"
	I1210 06:15:46.945391  389191 pod_ready.go:86] duration metric: took 400.193359ms for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.145257  389191 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544937  389191 pod_ready.go:94] pod "kube-scheduler-embed-certs-028500" is "Ready"
	I1210 06:15:47.544958  389191 pod_ready.go:86] duration metric: took 399.673562ms for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544969  389191 pod_ready.go:40] duration metric: took 38.406618977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:47.594190  389191 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:15:47.595325  389191 out.go:179] * Done! kubectl is now configured to use "embed-certs-028500" cluster and "default" namespace by default
	I1210 06:15:44.308766  398989 addons.go:530] duration metric: took 2.258355424s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:44.795874  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.800857  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.800883  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:45.296231  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:45.301136  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1210 06:15:45.302322  398989 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:45.302347  398989 api_server.go:131] duration metric: took 1.006591687s to wait for apiserver health ...
	I1210 06:15:45.302357  398989 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:45.306315  398989 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:45.306352  398989 system_pods.go:61] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.306367  398989 system_pods.go:61] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.306382  398989 system_pods.go:61] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.306398  398989 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.306414  398989 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.306429  398989 system_pods.go:61] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.306439  398989 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.306446  398989 system_pods.go:61] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.306457  398989 system_pods.go:74] duration metric: took 4.090626ms to wait for pod list to return data ...
	I1210 06:15:45.306469  398989 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:45.309065  398989 default_sa.go:45] found service account: "default"
	I1210 06:15:45.309111  398989 default_sa.go:55] duration metric: took 2.635327ms for default service account to be created ...
	I1210 06:15:45.309121  398989 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:45.312161  398989 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:45.312188  398989 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.312199  398989 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.312211  398989 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.312295  398989 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.312334  398989 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.312348  398989 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.312364  398989 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.312380  398989 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.312393  398989 system_pods.go:126] duration metric: took 3.26398ms to wait for k8s-apps to be running ...
	I1210 06:15:45.312421  398989 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:45.312464  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:45.330746  398989 system_svc.go:56] duration metric: took 18.317711ms WaitForService to wait for kubelet
	I1210 06:15:45.330808  398989 kubeadm.go:587] duration metric: took 3.280637081s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:45.330849  398989 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:45.333665  398989 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:45.333690  398989 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:45.333707  398989 node_conditions.go:105] duration metric: took 2.852028ms to run NodePressure ...
	I1210 06:15:45.333720  398989 start.go:242] waiting for startup goroutines ...
	I1210 06:15:45.333730  398989 start.go:247] waiting for cluster config update ...
	I1210 06:15:45.333744  398989 start.go:256] writing updated cluster config ...
	I1210 06:15:45.334096  398989 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:45.338120  398989 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:45.341568  398989 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:15:47.347196  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:49.347509  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:51.348265  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:53.847175  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:56.347151  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:58.846930  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:01.346818  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:03.847144  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 10 06:15:29 embed-certs-028500 crio[562]: time="2025-12-10T06:15:29.258923485Z" level=info msg="Started container" PID=1738 containerID=94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper id=d8f315ee-6afe-438f-9438-5ddf38e659fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d8aaedfc177ff041f707cf9b683d3234b2ef963e9b04b428bad88ad7f5cb2b6
	Dec 10 06:15:29 embed-certs-028500 crio[562]: time="2025-12-10T06:15:29.314868778Z" level=info msg="Removing container: d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35" id=17a2f301-463d-4953-be79-dd6269d7d3e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:29 embed-certs-028500 crio[562]: time="2025-12-10T06:15:29.323784022Z" level=info msg="Removed container d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=17a2f301-463d-4953-be79-dd6269d7d3e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.343972822Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6caf4847-f48d-43d8-88df-fdbcb2f3b20b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.344937361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=efb8d708-9c98-4d15-bb46-b5efa0f8da1e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.346061869Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c79d6d2b-e4ea-4e17-919f-ca25499a4101 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.346229232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.350852273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.351038438Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d4033eec9da64f5495301275318e7212e188cb60b543413e5107552100c1a7fc/merged/etc/passwd: no such file or directory"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.351075013Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d4033eec9da64f5495301275318e7212e188cb60b543413e5107552100c1a7fc/merged/etc/group: no such file or directory"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.351381854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.381287032Z" level=info msg="Created container cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc: kube-system/storage-provisioner/storage-provisioner" id=c79d6d2b-e4ea-4e17-919f-ca25499a4101 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.382385039Z" level=info msg="Starting container: cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc" id=378e8f98-5716-4758-9c7f-ce9ed01ff68b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:39 embed-certs-028500 crio[562]: time="2025-12-10T06:15:39.384322111Z" level=info msg="Started container" PID=1753 containerID=cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc description=kube-system/storage-provisioner/storage-provisioner id=378e8f98-5716-4758-9c7f-ce9ed01ff68b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3aa85a58f9eefeb67979e39d6c968cb5bcb0cd2a589fcfd1cb4839c1b3ad10a
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.214876328Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1f20f08b-a1eb-4691-943b-b6a5e877e170 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.215889396Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c0fb6676-18c1-437d-b72c-71ba5db1355a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.217343625Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=7ea59f70-6474-4992-9446-c80d39007a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.217471479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.224161953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.22468427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.265339183Z" level=info msg="Created container a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=7ea59f70-6474-4992-9446-c80d39007a8c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.26592932Z" level=info msg="Starting container: a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53" id=1bc0cc74-285c-4631-a105-6a33379a3341 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.267949888Z" level=info msg="Started container" PID=1788 containerID=a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper id=1bc0cc74-285c-4631-a105-6a33379a3341 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d8aaedfc177ff041f707cf9b683d3234b2ef963e9b04b428bad88ad7f5cb2b6
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.39013872Z" level=info msg="Removing container: 94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070" id=f1860647-1a44-40dd-bd81-dc19d0c756f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:15:53 embed-certs-028500 crio[562]: time="2025-12-10T06:15:53.399430812Z" level=info msg="Removed container 94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7/dashboard-metrics-scraper" id=f1860647-1a44-40dd-bd81-dc19d0c756f4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a0f5ccad99d1d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   6d8aaedfc177f       dashboard-metrics-scraper-6ffb444bf9-jnrs7   kubernetes-dashboard
	cb042d2d3ee4e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   f3aa85a58f9ee       storage-provisioner                          kube-system
	2fcc09c4dfe39       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   6751fc436b742       kubernetes-dashboard-855c9754f9-vrlx4        kubernetes-dashboard
	26ad907be1f81       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   d6248c202cb59       busybox                                      default
	159ebdaee8f04       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   206396284eafe       coredns-66bc5c9577-8xwfc                     kube-system
	eb4162f085aa8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   f3aa85a58f9ee       storage-provisioner                          kube-system
	7689ab2e3dacd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   fdbf9a1b37fcf       kindnet-6gq2z                                kube-system
	b2ae79e89c55e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           56 seconds ago      Running             kube-proxy                  0                   cd9092b9fa4ea       kube-proxy-sr7kh                             kube-system
	8cb47732447e7       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           59 seconds ago      Running             kube-controller-manager     0                   f6966c4f6afc6       kube-controller-manager-embed-certs-028500   kube-system
	9448aac68883a       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           59 seconds ago      Running             kube-apiserver              0                   1b2f6d61a1335       kube-apiserver-embed-certs-028500            kube-system
	f02f944bc389e       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           59 seconds ago      Running             kube-scheduler              0                   01d85574f631b       kube-scheduler-embed-certs-028500            kube-system
	6ef9ca2b457b0       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   9d86535702480       etcd-embed-certs-028500                      kube-system
	
	
	==> coredns [159ebdaee8f047d1f4901272cd48b5afa5c4eb9b9ab0ff33ac677eda1288666c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42041 - 238 "HINFO IN 1820425727757802405.8973029253656249968. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.88352195s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-028500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-028500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=embed-certs-028500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_14_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:14:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-028500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:15:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:15:48 +0000   Wed, 10 Dec 2025 06:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-028500
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                cff73820-6963-4ea9-ae17-4b15b6269bbe
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-8xwfc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-028500                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-6gq2z                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-028500             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-028500    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-sr7kh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-028500             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jnrs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vrlx4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-028500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-028500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-028500 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-028500 event: Registered Node embed-certs-028500 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-028500 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-028500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-028500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node embed-certs-028500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-028500 event: Registered Node embed-certs-028500 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [6ef9ca2b457b0540ee957485c2781b7054801e8cedcfebc48356c9df7479410e] <==
	{"level":"warn","ts":"2025-12-10T06:15:06.919243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.933321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.939932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.946854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.954230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.960565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.967624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.974318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.980872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.988365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:06.995796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.001966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.009138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.018819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.025277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.033061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.039660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.046877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.053539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.060260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.066798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.083043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.089796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.096727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:07.163377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:16:04 up 58 min,  0 user,  load average: 4.06, 4.40, 2.95
	Linux embed-certs-028500 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7689ab2e3dacdba99303712f566c57a921880a70789c8f5a102d20e7f6731ab2] <==
	I1210 06:15:08.755335       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:15:08.847383       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:15:08.847552       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:15:08.847568       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:15:08.847592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:15:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:15:09.050094       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:15:09.050155       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:15:09.050180       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:15:09.050387       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:15:09.450294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:15:09.450325       1 metrics.go:72] Registering metrics
	I1210 06:15:09.450412       1 controller.go:711] "Syncing nftables rules"
	I1210 06:15:19.051006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:19.051065       1 main.go:301] handling current node
	I1210 06:15:29.053843       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:29.053885       1 main.go:301] handling current node
	I1210 06:15:39.050175       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:39.050221       1 main.go:301] handling current node
	I1210 06:15:49.050254       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:49.050298       1 main.go:301] handling current node
	I1210 06:15:59.050949       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:15:59.050989       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9448aac68883a9dd13bef51e8981f7e636bdfe00fb0ac6083393a0705758776b] <==
	I1210 06:15:07.642886       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:15:07.642837       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:15:07.642969       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:15:07.642977       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:15:07.642983       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:15:07.642989       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:15:07.643187       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:15:07.643266       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:15:07.643628       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:15:07.648934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:15:07.651107       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:15:07.680195       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:15:07.690848       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:15:07.899791       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:15:07.926945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:15:07.944710       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:15:07.951609       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:15:07.958003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:15:07.992513       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.202.85"}
	I1210 06:15:08.001042       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.248.201"}
	I1210 06:15:08.545989       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:15:11.333490       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:15:11.435009       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:15:11.584154       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:15:11.584156       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8cb47732447e77b684b839f080aeb3be30b5387c9465db5c1669dcfea49925dd] <==
	I1210 06:15:10.942306       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 06:15:10.942361       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 06:15:10.942373       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 06:15:10.942380       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 06:15:10.944837       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:15:10.946031       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:15:10.948301       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:15:10.980745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:15:10.980772       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:15:10.980819       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:15:10.980858       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:15:10.980888       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:15:10.980802       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:15:10.980904       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 06:15:10.981455       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:15:10.982122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:15:10.982157       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:15:10.982165       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:15:10.983339       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:15:10.984511       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:15:10.995612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:15:10.997787       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:15:10.998982       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:15:11.001196       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:15:11.008758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b2ae79e89c55ea1c76b0f7bf4d2c9feb4cd3888baf3cc33684b2ee43e27c3cfd] <==
	I1210 06:15:08.619158       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:15:08.708398       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:15:08.809196       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:15:08.809227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 06:15:08.809301       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:15:08.826395       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:15:08.826439       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:15:08.832164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:15:08.832529       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:15:08.832559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:08.834826       1 config.go:200] "Starting service config controller"
	I1210 06:15:08.834855       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:15:08.834913       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:15:08.834929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:15:08.834953       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:15:08.834958       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:15:08.835130       1 config.go:309] "Starting node config controller"
	I1210 06:15:08.835149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:15:08.835156       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:15:08.935073       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:15:08.935131       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:15:08.935231       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f02f944bc389eec54d2261f9fd7c4019496559a482a7c7606927c07257c7d803] <==
	I1210 06:15:06.464907       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:15:07.590270       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:15:07.590414       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:15:07.590429       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:15:07.590439       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:15:07.608830       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1210 06:15:07.608909       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:07.611956       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:07.612018       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:07.612477       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:15:07.612628       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:15:07.712728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:15:15 embed-certs-028500 kubelet[712]: E1210 06:15:15.275831     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:15 embed-certs-028500 kubelet[712]: I1210 06:15:15.805730     712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:15:16 embed-certs-028500 kubelet[712]: I1210 06:15:16.280143     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:16 embed-certs-028500 kubelet[712]: E1210 06:15:16.280361     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:17 embed-certs-028500 kubelet[712]: I1210 06:15:17.931032     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:17 embed-certs-028500 kubelet[712]: E1210 06:15:17.931297     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:18 embed-certs-028500 kubelet[712]: I1210 06:15:18.525904     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vrlx4" podStartSLOduration=1.796583206 podStartE2EDuration="7.525880861s" podCreationTimestamp="2025-12-10 06:15:11 +0000 UTC" firstStartedPulling="2025-12-10 06:15:11.844987119 +0000 UTC m=+6.721647046" lastFinishedPulling="2025-12-10 06:15:17.574284767 +0000 UTC m=+12.450944701" observedRunningTime="2025-12-10 06:15:18.295876585 +0000 UTC m=+13.172536533" watchObservedRunningTime="2025-12-10 06:15:18.525880861 +0000 UTC m=+13.402540810"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: I1210 06:15:29.214524     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: I1210 06:15:29.313585     712 scope.go:117] "RemoveContainer" containerID="d1e6a61cc53e20ffbe52b6e31cf501b405374e361f3e7c39f3f61e2cb1ce5e35"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: I1210 06:15:29.313821     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:29 embed-certs-028500 kubelet[712]: E1210 06:15:29.314075     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:37 embed-certs-028500 kubelet[712]: I1210 06:15:37.931355     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:37 embed-certs-028500 kubelet[712]: E1210 06:15:37.931568     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:39 embed-certs-028500 kubelet[712]: I1210 06:15:39.343549     712 scope.go:117] "RemoveContainer" containerID="eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: I1210 06:15:53.214463     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: I1210 06:15:53.388843     712 scope.go:117] "RemoveContainer" containerID="94724648bee22b0fbd298bcc0ff5ea7683738e8ea83f276c4eb8ec5ee8b83070"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: I1210 06:15:53.389056     712 scope.go:117] "RemoveContainer" containerID="a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	Dec 10 06:15:53 embed-certs-028500 kubelet[712]: E1210 06:15:53.389269     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:15:57 embed-certs-028500 kubelet[712]: I1210 06:15:57.931235     712 scope.go:117] "RemoveContainer" containerID="a0f5ccad99d1dd768b3fa89480e72005f8d6decc3ec657c87225f531c0fd9c53"
	Dec 10 06:15:57 embed-certs-028500 kubelet[712]: E1210 06:15:57.931479     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jnrs7_kubernetes-dashboard(75431b9b-7240-4732-b3aa-7fd8576b7bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jnrs7" podUID="75431b9b-7240-4732-b3aa-7fd8576b7bc8"
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:16:00 embed-certs-028500 kubelet[712]: I1210 06:16:00.048698     712 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:16:00 embed-certs-028500 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [2fcc09c4dfe399e5ac6a0dfb0339ee598b36ca0347a95eef915bf614fb98b83d] <==
	2025/12/10 06:15:17 Using namespace: kubernetes-dashboard
	2025/12/10 06:15:17 Using in-cluster config to connect to apiserver
	2025/12/10 06:15:17 Using secret token for csrf signing
	2025/12/10 06:15:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:15:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:15:17 Successful initial request to the apiserver, version: v1.34.3
	2025/12/10 06:15:17 Generating JWE encryption key
	2025/12/10 06:15:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:15:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:15:17 Initializing JWE encryption key from synchronized object
	2025/12/10 06:15:17 Creating in-cluster Sidecar client
	2025/12/10 06:15:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:15:17 Serving insecurely on HTTP port: 9090
	2025/12/10 06:15:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:15:17 Starting overwatch
	
	
	==> storage-provisioner [cb042d2d3ee4eed59556574dcf66edb5cd45105056d9d11b95949ca636d2b0bc] <==
	I1210 06:15:39.398134       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:15:39.407414       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:15:39.407464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:15:39.409978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:42.865123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:47.125581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:50.723888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:53.777893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:56.800177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:56.804374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:56.804512       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:15:56.804650       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-028500_72ab12d8-28e6-48ed-a0af-4022d4c83e4a!
	I1210 06:15:56.804651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f1f4ad0-e1fa-4611-8756-9fd0b611cf54", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-028500_72ab12d8-28e6-48ed-a0af-4022d4c83e4a became leader
	W1210 06:15:56.806570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:56.810267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:15:56.904902       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-028500_72ab12d8-28e6-48ed-a0af-4022d4c83e4a!
	W1210 06:15:58.813454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:15:58.817104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:00.819870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:00.823596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:02.827482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:02.832343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:04.835545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:04.839396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eb4162f085aa839793309bbf94205b0c5774dcaef613e64be1997d6345634f6f] <==
	I1210 06:15:08.589212       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:15:38.591523       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-028500 -n embed-certs-028500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-028500 -n embed-certs-028500: exit status 2 (317.623355ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-028500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.96s)
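The kubelet log above shows dashboard-metrics-scraper-6ffb444bf9-jnrs7 in CrashLoopBackOff and the first storage-provisioner container dying on an API timeout (dial tcp 10.96.0.1:443) shortly before kubelet was stopped for the pause attempt. A minimal follow-up sketch, reusing only commands that already appear in this report and assuming the embed-certs-028500 profile has not yet been deleted (the container ID prefix is the one the kubelet reported above):

	# list dashboard containers through CRI-O, similar to how the pause code enumerates them
	out/minikube-linux-amd64 -p embed-certs-028500 ssh -- \
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kubernetes-dashboard
	# fetch logs for the crash-looping scraper container (ID prefix taken from the kubelet log)
	out/minikube-linux-amd64 -p embed-certs-028500 ssh -- \
	  sudo crictl logs a0f5ccad99d1d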

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-125336 --alsologtostderr -v=1
E1210 06:16:36.033435    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:37.157617    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-125336 --alsologtostderr -v=1: exit status 80 (2.382902119s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-125336 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:16:35.399868  408560 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:16:35.400127  408560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:16:35.400136  408560 out.go:374] Setting ErrFile to fd 2...
	I1210 06:16:35.400141  408560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:16:35.400348  408560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:16:35.400574  408560 out.go:368] Setting JSON to false
	I1210 06:16:35.400590  408560 mustload.go:66] Loading cluster: default-k8s-diff-port-125336
	I1210 06:16:35.400915  408560 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:16:35.401270  408560 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:16:35.418559  408560 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:16:35.418777  408560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:16:35.476141  408560 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:16:35.465578454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:16:35.476888  408560 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-125336 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:16:35.478676  408560 out.go:179] * Pausing node default-k8s-diff-port-125336 ... 
	I1210 06:16:35.479727  408560 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:16:35.479950  408560 ssh_runner.go:195] Run: systemctl --version
	I1210 06:16:35.479985  408560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:16:35.496883  408560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:16:35.588824  408560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:16:35.612020  408560 pause.go:52] kubelet running: true
	I1210 06:16:35.612107  408560 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:35.769696  408560 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:35.769829  408560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:35.832066  408560 cri.go:89] found id: "cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f"
	I1210 06:16:35.832105  408560 cri.go:89] found id: "d008c80af528911395e273d86d6218ebc4d984547613f7aacea14e288dffe717"
	I1210 06:16:35.832110  408560 cri.go:89] found id: "4cae6db5d58d5f55296fdd99cb88edcd6d5f157201404a728337fd012e8f1b6e"
	I1210 06:16:35.832113  408560 cri.go:89] found id: "9adb58aed15d4ed422818a0c187aae20692217265ccf3d9f6007cd504c1d8982"
	I1210 06:16:35.832116  408560 cri.go:89] found id: "8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b"
	I1210 06:16:35.832120  408560 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:16:35.832122  408560 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:16:35.832125  408560 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:16:35.832128  408560 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:16:35.832133  408560 cri.go:89] found id: "62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	I1210 06:16:35.832136  408560 cri.go:89] found id: "164632a10922a2106f042cad684065136ba79e69def7383698535847ea79adde"
	I1210 06:16:35.832139  408560 cri.go:89] found id: ""
	I1210 06:16:35.832176  408560 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:35.843300  408560 retry.go:31] will retry after 172.394019ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:35Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:16:36.016756  408560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:16:36.029158  408560 pause.go:52] kubelet running: false
	I1210 06:16:36.029208  408560 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:36.163576  408560 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:36.163659  408560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:36.224854  408560 cri.go:89] found id: "cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f"
	I1210 06:16:36.224881  408560 cri.go:89] found id: "d008c80af528911395e273d86d6218ebc4d984547613f7aacea14e288dffe717"
	I1210 06:16:36.224888  408560 cri.go:89] found id: "4cae6db5d58d5f55296fdd99cb88edcd6d5f157201404a728337fd012e8f1b6e"
	I1210 06:16:36.224892  408560 cri.go:89] found id: "9adb58aed15d4ed422818a0c187aae20692217265ccf3d9f6007cd504c1d8982"
	I1210 06:16:36.224895  408560 cri.go:89] found id: "8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b"
	I1210 06:16:36.224898  408560 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:16:36.224901  408560 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:16:36.224904  408560 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:16:36.224907  408560 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:16:36.224935  408560 cri.go:89] found id: "62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	I1210 06:16:36.224943  408560 cri.go:89] found id: "164632a10922a2106f042cad684065136ba79e69def7383698535847ea79adde"
	I1210 06:16:36.224946  408560 cri.go:89] found id: ""
	I1210 06:16:36.224998  408560 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:36.236043  408560 retry.go:31] will retry after 357.600508ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:36Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:16:36.594678  408560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:16:36.607126  408560 pause.go:52] kubelet running: false
	I1210 06:16:36.607172  408560 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:36.742226  408560 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:36.742301  408560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:36.804651  408560 cri.go:89] found id: "cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f"
	I1210 06:16:36.804674  408560 cri.go:89] found id: "d008c80af528911395e273d86d6218ebc4d984547613f7aacea14e288dffe717"
	I1210 06:16:36.804678  408560 cri.go:89] found id: "4cae6db5d58d5f55296fdd99cb88edcd6d5f157201404a728337fd012e8f1b6e"
	I1210 06:16:36.804681  408560 cri.go:89] found id: "9adb58aed15d4ed422818a0c187aae20692217265ccf3d9f6007cd504c1d8982"
	I1210 06:16:36.804683  408560 cri.go:89] found id: "8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b"
	I1210 06:16:36.804691  408560 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:16:36.804694  408560 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:16:36.804697  408560 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:16:36.804700  408560 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:16:36.804707  408560 cri.go:89] found id: "62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	I1210 06:16:36.804712  408560 cri.go:89] found id: "164632a10922a2106f042cad684065136ba79e69def7383698535847ea79adde"
	I1210 06:16:36.804716  408560 cri.go:89] found id: ""
	I1210 06:16:36.804769  408560 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:36.815738  408560 retry.go:31] will retry after 683.285611ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:36Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:16:37.499595  408560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:16:37.511865  408560 pause.go:52] kubelet running: false
	I1210 06:16:37.511913  408560 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:16:37.644756  408560 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:16:37.644840  408560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:16:37.705218  408560 cri.go:89] found id: "cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f"
	I1210 06:16:37.705240  408560 cri.go:89] found id: "d008c80af528911395e273d86d6218ebc4d984547613f7aacea14e288dffe717"
	I1210 06:16:37.705244  408560 cri.go:89] found id: "4cae6db5d58d5f55296fdd99cb88edcd6d5f157201404a728337fd012e8f1b6e"
	I1210 06:16:37.705248  408560 cri.go:89] found id: "9adb58aed15d4ed422818a0c187aae20692217265ccf3d9f6007cd504c1d8982"
	I1210 06:16:37.705251  408560 cri.go:89] found id: "8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b"
	I1210 06:16:37.705254  408560 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:16:37.705257  408560 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:16:37.705260  408560 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:16:37.705263  408560 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:16:37.705275  408560 cri.go:89] found id: "62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	I1210 06:16:37.705278  408560 cri.go:89] found id: "164632a10922a2106f042cad684065136ba79e69def7383698535847ea79adde"
	I1210 06:16:37.705281  408560 cri.go:89] found id: ""
	I1210 06:16:37.705316  408560 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:16:37.718486  408560 out.go:203] 
	W1210 06:16:37.719642  408560 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:16:37.719663  408560 out.go:285] * 
	* 
	W1210 06:16:37.723635  408560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:16:37.724659  408560 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-125336 --alsologtostderr -v=1 failed: exit status 80
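The pause fails for the same reason as the other Pause tests in this run: every retry of sudo runc list -f json inside the node exits 1 with "open /run/runc: no such file or directory", even though crictl still reports the kube-system containers. A rough manual reproduction of that sequence, using the exact commands from the stderr above and assuming the default-k8s-diff-port-125336 profile is still running:

	# kubelet state, CRI container listing, then the runc listing that the pause path retries
	out/minikube-linux-amd64 -p default-k8s-diff-port-125336 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-amd64 -p default-k8s-diff-port-125336 ssh -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p default-k8s-diff-port-125336 ssh -- sudo runc list -f json
	# check whether the runc state directory exists at all in this kicbase image
	out/minikube-linux-amd64 -p default-k8s-diff-port-125336 ssh -- ls -ld /run/runc

If /run/runc is missing while crictl still lists running containers, the runtime is presumably tracking state under a different root than the one runc list defaults to; that is an inference from these logs, not something the report itself establishes.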
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-125336
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-125336:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22",
	        "Created": "2025-12-10T06:14:12.606946513Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 399194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:15:34.491908876Z",
	            "FinishedAt": "2025-12-10T06:15:33.349641769Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/hosts",
	        "LogPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22-json.log",
	        "Name": "/default-k8s-diff-port-125336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-125336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-125336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22",
	                "LowerDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-125336",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-125336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-125336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-125336",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-125336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f28bc84dd74ed2eac0658cae2f7cf7483c2ce290d0ef9abc8468a25bedd38574",
	            "SandboxKey": "/var/run/docker/netns/f28bc84dd74e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-125336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6dcc364cf8d2e6fffb8ab01503e1fba4cf2ae27c41034eeff5b62eed98af1ff5",
	                    "EndpointID": "5011876e8acd6ec08b6dd9bdbc1413b661254c8bc35f7519eb83a791206ba16d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d2:2b:53:d8:3f:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-125336",
	                        "2b7aea94b356"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336: exit status 2 (305.184053ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-125336 logs -n 25
E1210 06:16:38.595462    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:16 UTC │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ no-preload-468539 image list --format=json                                                                                                                                                                                                         │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p no-preload-468539 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ embed-certs-028500 image list --format=json                                                                                                                                                                                                        │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p embed-certs-028500 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p embed-certs-028500                                                                                                                                                                                                                              │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │ 10 Dec 25 06:16 UTC │
	│ delete  │ -p embed-certs-028500                                                                                                                                                                                                                              │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │ 10 Dec 25 06:16 UTC │
	│ image   │ default-k8s-diff-port-125336 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │ 10 Dec 25 06:16 UTC │
	│ pause   │ -p default-k8s-diff-port-125336 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.174412  398989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:39.179699  398989 fix.go:56] duration metric: took 4.748715142s for fixHost
	I1210 06:15:39.179726  398989 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 4.748759367s
	I1210 06:15:39.179795  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:39.198657  398989 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:39.198712  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.198747  398989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:39.198819  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.220204  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.220241  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.317475  398989 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:39.391108  398989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:39.430876  398989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:39.435737  398989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:39.435812  398989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:39.444134  398989 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:39.444154  398989 start.go:496] detecting cgroup driver to use...
	I1210 06:15:39.444185  398989 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:39.444220  398989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:39.458418  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:39.470158  398989 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:39.470210  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:39.485432  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:39.497705  398989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:39.587848  398989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:39.679325  398989 docker.go:234] disabling docker service ...
	I1210 06:15:39.679390  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:39.695744  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:39.710121  398989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:39.803290  398989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:39.889666  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:39.901841  398989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:39.916001  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.053859  398989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:40.053907  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.064032  398989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:40.064119  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.074052  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.082799  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.091069  398989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:40.099125  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.108348  398989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.116442  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.124562  398989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:40.131659  398989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:40.139831  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:40.235238  398989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:40.390045  398989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:40.390127  398989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:40.394019  398989 start.go:564] Will wait 60s for crictl version
	I1210 06:15:40.394073  398989 ssh_runner.go:195] Run: which crictl
	I1210 06:15:40.397521  398989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:40.422130  398989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:40.422196  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.449888  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.482873  398989 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:15:40.484109  398989 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:40.504017  398989 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:40.508495  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
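Note: the bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current gateway address, and copy the temp file back in one sudo step. Spelled out with the IP from this run:

    # rebuild /etc/hosts without the old entry, then append the fresh mapping
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo "192.168.103.1	host.minikube.internal"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts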
	I1210 06:15:40.519961  398989 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:40.520150  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.655009  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.788669  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.920137  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:40.920210  398989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:40.955931  398989 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:40.955957  398989 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:40.955966  398989 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1210 06:15:40.956107  398989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-125336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
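Note: the [Unit]/[Service]/[Install] fragment above is written as a systemd drop-in rather than a full unit file: the empty ExecStart= clears the packaged command line before the minikube-specific one (custom kubeconfig, node IP, hostname override) is set. The scp lines below show it landing in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; inspecting the effective unit on the node is a standard systemd exercise:

    # print the base kubelet unit plus the 10-kubeadm.conf drop-in with the override
    sudo systemctl cat kubelet
    # confirm which ExecStart won after the daemon-reload further down
    sudo systemctl show kubelet -p ExecStart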
	I1210 06:15:40.956192  398989 ssh_runner.go:195] Run: crio config
	I1210 06:15:41.004526  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:41.004548  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:41.004564  398989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:41.004584  398989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125336 NodeName:default-k8s-diff-port-125336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:41.004697  398989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:41.004752  398989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:41.013662  398989 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:41.013711  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:41.021680  398989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 06:15:41.034639  398989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:41.047897  398989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
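Note: the four YAML documents printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what was just copied to /var/tmp/minikube/kubeadm.yaml.new. On a restart minikube only reconfigures the control plane if this file differs from the kubeadm.yaml already on disk, which is the diff that decides "does not require reconfiguration" later in this log; by hand the same check is:

    # empty diff and exit status 0 mean the running cluster already matches the desired config
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "kubeadm config unchanged"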
	I1210 06:15:41.060681  398989 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:41.064298  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:41.074539  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:41.167815  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:41.192312  398989 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336 for IP: 192.168.103.2
	I1210 06:15:41.192334  398989 certs.go:195] generating shared ca certs ...
	I1210 06:15:41.192367  398989 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:41.192505  398989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:41.192546  398989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:41.192557  398989 certs.go:257] generating profile certs ...
	I1210 06:15:41.192643  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key
	I1210 06:15:41.192694  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134
	I1210 06:15:41.192729  398989 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key
	I1210 06:15:41.192855  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:41.192897  398989 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:41.192910  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:41.192952  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:41.192986  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:41.193016  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:41.193074  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:41.193841  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:41.212216  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:41.230779  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:41.249215  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:41.273141  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:15:41.291653  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:41.308892  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:41.328983  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:41.348815  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:41.369178  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:41.390044  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:41.407887  398989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:41.422822  398989 ssh_runner.go:195] Run: openssl version
	I1210 06:15:41.430217  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.438931  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:41.447682  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451942  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451995  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.496117  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:41.504580  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.512960  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:41.521564  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525244  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525308  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.564172  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:41.572852  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.580900  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:41.588301  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592675  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592721  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.637108  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
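Note: the openssl x509 -hash calls above compute the subject-name hash OpenSSL uses to look up trust anchors, and the test -L checks confirm /etc/ssl/certs holds a <hash>.0 symlink for each CA (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). Recreating one such link manually follows the same convention (a sketch, not the exact minikube code path):

    # the subject hash becomes the symlink name OpenSSL expects under /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"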
	I1210 06:15:41.645490  398989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:41.649879  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:41.690638  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:41.747836  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:41.800228  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:41.862694  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:41.914250  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
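Note: each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a clean pass over the apiserver, etcd and front-proxy client certs is what lets minikube reuse the existing control-plane certificates instead of regenerating them. One cert checked in isolation:

    # exit 0: valid for at least another 24h; exit 1: expiring or already expired
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for >= 24h" || echo "expires within 24h"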
	I1210 06:15:41.958747  398989 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:41.959041  398989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:41.959166  398989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:41.998590  398989 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:15:41.998610  398989 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:15:41.998616  398989 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:15:41.998621  398989 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:15:41.998625  398989 cri.go:89] found id: ""
	I1210 06:15:41.998665  398989 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:42.012230  398989 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:42.012308  398989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:42.023047  398989 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:42.023062  398989 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:42.023133  398989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:42.032028  398989 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:42.033327  398989 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-125336" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.034299  398989 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-125336" cluster setting kubeconfig missing "default-k8s-diff-port-125336" context setting]
	I1210 06:15:42.035703  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.037888  398989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:42.047350  398989 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:15:42.047375  398989 kubeadm.go:602] duration metric: took 24.306597ms to restartPrimaryControlPlane
	I1210 06:15:42.047383  398989 kubeadm.go:403] duration metric: took 88.644178ms to StartCluster
	I1210 06:15:42.047399  398989 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.047471  398989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.049858  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.050141  398989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:42.050363  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:42.050409  398989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:42.050484  398989 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.050502  398989 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.050511  398989 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:42.050535  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051015  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051175  398989 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051191  398989 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.051199  398989 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:42.051223  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051559  398989 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051618  398989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:15:42.051583  398989 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:42.051661  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051950  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.053296  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:42.082195  398989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:42.082199  398989 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:15:42.083378  398989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.083403  398989 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1210 06:15:37.646711  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:39.648042  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:41.648596  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:42.083414  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:42.083554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.086520  398989 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.086542  398989 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:42.086569  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.086807  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:42.086824  398989 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:42.086879  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.088501  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.127971  398989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.127995  398989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:42.128058  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.131157  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.131148  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.163643  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.238425  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:42.261214  398989 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:42.266856  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:42.266878  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:42.273292  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.296500  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:42.296642  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:42.316168  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.322727  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:42.322747  398989 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:42.342110  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:42.342132  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:42.364017  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:42.364037  398989 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:42.383601  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:42.383628  398989 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:42.400222  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:42.400267  398989 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:42.413822  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:42.413841  398989 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:42.428985  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:42.429002  398989 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:42.445006  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:43.730716  398989 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:43.730761  398989 node_ready.go:38] duration metric: took 1.469517861s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:43.730780  398989 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:43.730833  398989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:44.295467  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.022145188s)
	I1210 06:15:44.295527  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.979317742s)
	I1210 06:15:44.295605  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.850565953s)
	I1210 06:15:44.295731  398989 api_server.go:72] duration metric: took 2.245559846s to wait for apiserver process to appear ...
	I1210 06:15:44.295748  398989 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:44.295770  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.297453  398989 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-125336 addons enable metrics-server
	
	I1210 06:15:44.301230  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.301258  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
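Note: the 500 above is expected this early in a restart: only the rbac/bootstrap-roles and priority-class post-start hooks are still pending, and minikube simply retries until /healthz flips to 200 (which happens about a second later in this log). Once the kubeconfig is in place the same verbose breakdown can be requested by hand:

    # ask the apiserver for the per-check breakdown shown above
    kubectl get --raw='/healthz?verbose'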
	I1210 06:15:44.307227  398989 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 06:15:44.147036  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:46.146566  389191 pod_ready.go:94] pod "coredns-66bc5c9577-8xwfc" is "Ready"
	I1210 06:15:46.146592  389191 pod_ready.go:86] duration metric: took 37.005340048s for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.149120  389191 pod_ready.go:83] waiting for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.152937  389191 pod_ready.go:94] pod "etcd-embed-certs-028500" is "Ready"
	I1210 06:15:46.152956  389191 pod_ready.go:86] duration metric: took 3.81638ms for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.154886  389191 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.158540  389191 pod_ready.go:94] pod "kube-apiserver-embed-certs-028500" is "Ready"
	I1210 06:15:46.158566  389191 pod_ready.go:86] duration metric: took 3.65933ms for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.160461  389191 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.345207  389191 pod_ready.go:94] pod "kube-controller-manager-embed-certs-028500" is "Ready"
	I1210 06:15:46.345232  389191 pod_ready.go:86] duration metric: took 184.75138ms for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.545176  389191 pod_ready.go:83] waiting for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.945367  389191 pod_ready.go:94] pod "kube-proxy-sr7kh" is "Ready"
	I1210 06:15:46.945391  389191 pod_ready.go:86] duration metric: took 400.193359ms for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.145257  389191 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544937  389191 pod_ready.go:94] pod "kube-scheduler-embed-certs-028500" is "Ready"
	I1210 06:15:47.544958  389191 pod_ready.go:86] duration metric: took 399.673562ms for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544969  389191 pod_ready.go:40] duration metric: took 38.406618977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:47.594190  389191 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:15:47.595325  389191 out.go:179] * Done! kubectl is now configured to use "embed-certs-028500" cluster and "default" namespace by default
	I1210 06:15:44.308766  398989 addons.go:530] duration metric: took 2.258355424s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
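Note: with the addons.go line above the enable-addons phase is complete for this profile: storage-provisioner, dashboard and default-storageclass were applied in roughly 2.3s while the apiserver was still settling. The resulting per-profile addon state can be listed afterwards with the same binary the test drives:

    # show which addons are enabled for this profile
    out/minikube-linux-amd64 -p default-k8s-diff-port-125336 addons list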
	I1210 06:15:44.795874  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.800857  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.800883  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:45.296231  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:45.301136  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1210 06:15:45.302322  398989 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:45.302347  398989 api_server.go:131] duration metric: took 1.006591687s to wait for apiserver health ...
	I1210 06:15:45.302357  398989 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:45.306315  398989 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:45.306352  398989 system_pods.go:61] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.306367  398989 system_pods.go:61] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.306382  398989 system_pods.go:61] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.306398  398989 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.306414  398989 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.306429  398989 system_pods.go:61] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.306439  398989 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.306446  398989 system_pods.go:61] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.306457  398989 system_pods.go:74] duration metric: took 4.090626ms to wait for pod list to return data ...
	I1210 06:15:45.306469  398989 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:45.309065  398989 default_sa.go:45] found service account: "default"
	I1210 06:15:45.309111  398989 default_sa.go:55] duration metric: took 2.635327ms for default service account to be created ...
	I1210 06:15:45.309121  398989 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:45.312161  398989 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:45.312188  398989 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.312199  398989 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.312211  398989 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.312295  398989 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.312334  398989 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.312348  398989 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.312364  398989 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.312380  398989 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.312393  398989 system_pods.go:126] duration metric: took 3.26398ms to wait for k8s-apps to be running ...
	I1210 06:15:45.312421  398989 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:45.312464  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:45.330746  398989 system_svc.go:56] duration metric: took 18.317711ms WaitForService to wait for kubelet
	I1210 06:15:45.330808  398989 kubeadm.go:587] duration metric: took 3.280637081s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:45.330849  398989 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:45.333665  398989 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:45.333690  398989 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:45.333707  398989 node_conditions.go:105] duration metric: took 2.852028ms to run NodePressure ...
	I1210 06:15:45.333720  398989 start.go:242] waiting for startup goroutines ...
	I1210 06:15:45.333730  398989 start.go:247] waiting for cluster config update ...
	I1210 06:15:45.333744  398989 start.go:256] writing updated cluster config ...
	I1210 06:15:45.334096  398989 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:45.338120  398989 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:45.341568  398989 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:15:47.347196  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:49.347509  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:51.348265  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:53.847175  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:56.347151  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:58.846930  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:01.346818  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:03.847144  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:06.346268  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:08.346933  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:10.848695  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:13.346002  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:15.346439  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:17.346561  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:19.845877  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	I1210 06:16:21.846025  398989 pod_ready.go:94] pod "coredns-66bc5c9577-gkk6m" is "Ready"
	I1210 06:16:21.846050  398989 pod_ready.go:86] duration metric: took 36.504462899s for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.848259  398989 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.851608  398989 pod_ready.go:94] pod "etcd-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:21.851627  398989 pod_ready.go:86] duration metric: took 3.347943ms for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.853330  398989 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.856567  398989 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:21.856584  398989 pod_ready.go:86] duration metric: took 3.238739ms for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.858225  398989 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.044224  398989 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:22.044251  398989 pod_ready.go:86] duration metric: took 186.009559ms for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.244699  398989 pod_ready.go:83] waiting for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.644027  398989 pod_ready.go:94] pod "kube-proxy-mw5sp" is "Ready"
	I1210 06:16:22.644053  398989 pod_ready.go:86] duration metric: took 399.322725ms for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.844992  398989 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:23.244069  398989 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:23.244122  398989 pod_ready.go:86] duration metric: took 399.106623ms for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:23.244133  398989 pod_ready.go:40] duration metric: took 37.90598909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:16:23.286217  398989 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:16:23.287910  398989 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-125336" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:15:55 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:15:55.139323378Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 06:15:55 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:15:55.142385274Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:15:55 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:15:55.142408656Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.281263943Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2abff1d7-3805-4426-b796-9e98a9840237 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.28216073Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4eb91fd3-19cf-4c38-9203-a391d210557d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.283135201Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper" id=f8b57e94-d40b-4faf-a34f-cb300d83c405 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.283275079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.288650034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.289109378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.31567174Z" level=info msg="Created container 62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper" id=f8b57e94-d40b-4faf-a34f-cb300d83c405 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.316201931Z" level=info msg="Starting container: 62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5" id=d2e1c792-d5ea-4035-99b0-2e56e134938d name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.317830236Z" level=info msg="Started container" PID=1755 containerID=62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper id=d2e1c792-d5ea-4035-99b0-2e56e134938d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a7a83ecfa1b79f71e998c7f947ce050845136820524f6971b9a0b6a6cf1652e
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.395602908Z" level=info msg="Removing container: 8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b" id=26511f8e-494d-4de7-bc16-3c575821f6b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.405120703Z" level=info msg="Removed container 8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper" id=26511f8e-494d-4de7-bc16-3c575821f6b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.403621758Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=083d919f-79ad-4997-be05-617c36fcd009 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.404554407Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b37a0a9f-f1d6-4eb3-a724-73ba9ac7e514 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.405648994Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f3254d85-5736-4d45-8d2e-436ec5ebd790 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.40587892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.41142929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.411625833Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f3d26cb1f287ad5fb35cea3469386736dc484e18dd680b5f260ee19cc4aea704/merged/etc/passwd: no such file or directory"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.41166157Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f3d26cb1f287ad5fb35cea3469386736dc484e18dd680b5f260ee19cc4aea704/merged/etc/group: no such file or directory"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.411960713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.44277752Z" level=info msg="Created container cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f: kube-system/storage-provisioner/storage-provisioner" id=f3254d85-5736-4d45-8d2e-436ec5ebd790 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.443324706Z" level=info msg="Starting container: cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f" id=85e98f9f-8003-46c7-a217-a9d3b3768951 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.44514755Z" level=info msg="Started container" PID=1772 containerID=cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f description=kube-system/storage-provisioner/storage-provisioner id=85e98f9f-8003-46c7-a217-a9d3b3768951 name=/runtime.v1.RuntimeService/StartContainer sandboxID=571c423b375318362761373f92e5929ec59453acc866deb9de4641db3bcee7c7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	cca3ae445cd55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   571c423b37531       storage-provisioner                                    kube-system
	62a08a36d08da       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   4a7a83ecfa1b7       dashboard-metrics-scraper-6ffb444bf9-22cr4             kubernetes-dashboard
	164632a10922a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   cf9bd9d5dec16       kubernetes-dashboard-855c9754f9-ccjtq                  kubernetes-dashboard
	a43f1b12bb382       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   7a4151a9a9ba4       busybox                                                default
	d008c80af5289       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   24c7af7f3a846       coredns-66bc5c9577-gkk6m                               kube-system
	4cae6db5d58d5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   51bfcadda644c       kindnet-lfds9                                          kube-system
	9adb58aed15d4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           54 seconds ago      Running             kube-proxy                  0                   196964bcf9837       kube-proxy-mw5sp                                       kube-system
	8ed0496d0be7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   571c423b37531       storage-provisioner                                    kube-system
	92cdc11606d33       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           56 seconds ago      Running             kube-apiserver              0                   d68383e6a1e35       kube-apiserver-default-k8s-diff-port-125336            kube-system
	2dded97e81369       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           56 seconds ago      Running             kube-scheduler              0                   36c6b7cd8ae45       kube-scheduler-default-k8s-diff-port-125336            kube-system
	355b450a39b31       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   81a3c85adf9f1       etcd-default-k8s-diff-port-125336                      kube-system
	4492dccb6c585       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           56 seconds ago      Running             kube-controller-manager     0                   8c54dcd4d13a7       kube-controller-manager-default-k8s-diff-port-125336   kube-system
	
	
	==> coredns [d008c80af528911395e273d86d6218ebc4d984547613f7aacea14e288dffe717] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54972 - 8444 "HINFO IN 8248547206015904360.4558364408995920036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.516199534s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-125336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-125336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=default-k8s-diff-port-125336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_14_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-125336
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:16:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:15:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-125336
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                f4329173-01c3-494e-8c73-1314ca67fddf
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-gkk6m                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-125336                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-lfds9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-125336             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-125336    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-mw5sp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-125336             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-22cr4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ccjtq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node default-k8s-diff-port-125336 event: Registered Node default-k8s-diff-port-125336 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-125336 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-125336 event: Registered Node default-k8s-diff-port-125336 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16] <==
	{"level":"warn","ts":"2025-12-10T06:15:43.030143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.040258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.049628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.056462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.063374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.071161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.078986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.086930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.093412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.104204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.111901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.119210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.126035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.132883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.140382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.147777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.155255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.163330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.171418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.179128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.187752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.208723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.216555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.230615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.288109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60138","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:16:38 up 59 min,  0 user,  load average: 2.39, 3.94, 2.85
	Linux default-k8s-diff-port-125336 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cae6db5d58d5f55296fdd99cb88edcd6d5f157201404a728337fd012e8f1b6e] <==
	I1210 06:15:44.924142       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:15:44.924533       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:15:44.924766       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:15:44.924789       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:15:44.924816       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:15:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:15:45.124676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:15:45.124706       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:15:45.124718       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:15:45.124894       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:15:45.625509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:15:45.625545       1 metrics.go:72] Registering metrics
	I1210 06:15:45.625649       1 controller.go:711] "Syncing nftables rules"
	I1210 06:15:55.125060       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:15:55.125153       1 main.go:301] handling current node
	I1210 06:16:05.124973       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:05.125010       1 main.go:301] handling current node
	I1210 06:16:15.124188       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:15.124243       1 main.go:301] handling current node
	I1210 06:16:25.124970       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:25.125016       1 main.go:301] handling current node
	I1210 06:16:35.124255       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:35.124293       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285] <==
	I1210 06:15:43.775742       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:15:43.778061       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:15:43.778100       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:15:43.778991       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:15:43.775230       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:15:43.781573       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:15:43.781973       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:15:43.781981       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:15:43.781988       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:15:43.781648       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:15:43.793812       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:15:43.795327       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:15:43.844374       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:15:43.858364       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:15:44.069118       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:15:44.096650       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:15:44.113152       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:15:44.120316       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:15:44.126604       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:15:44.157039       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.126.99"}
	I1210 06:15:44.165345       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.217.36"}
	I1210 06:15:44.680570       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:15:47.162115       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:15:47.262511       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:15:47.614195       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d] <==
	I1210 06:15:47.108821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:15:47.108833       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:15:47.108841       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:15:47.108852       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:15:47.108855       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:15:47.108821       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:15:47.109060       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:15:47.109185       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 06:15:47.109293       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 06:15:47.109389       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:15:47.109428       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-125336"
	I1210 06:15:47.109404       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:15:47.109479       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 06:15:47.109558       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:15:47.110707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:15:47.110772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:15:47.111403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:15:47.111570       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:15:47.113839       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:15:47.113912       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:15:47.115426       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:15:47.117684       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:15:47.119945       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:15:47.121153       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:15:47.134570       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9adb58aed15d4ed422818a0c187aae20692217265ccf3d9f6007cd504c1d8982] <==
	I1210 06:15:44.711050       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:15:44.772999       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:15:44.873970       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:15:44.874016       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:15:44.874128       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:15:44.902358       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:15:44.902417       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:15:44.908045       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:15:44.908558       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:15:44.908621       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:44.910219       1 config.go:200] "Starting service config controller"
	I1210 06:15:44.910292       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:15:44.910246       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:15:44.910359       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:15:44.910262       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:15:44.910418       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:15:44.910364       1 config.go:309] "Starting node config controller"
	I1210 06:15:44.910463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:15:44.910493       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:15:45.010399       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:15:45.010484       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:15:45.010536       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368] <==
	I1210 06:15:42.983312       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:15:44.340733       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1210 06:15:44.340756       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:44.346476       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:15:44.346537       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:15:44.346593       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:15:44.346605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:44.346612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:15:44.346615       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:44.347004       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:15:44.347028       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:15:44.447010       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:44.447050       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:15:44.447011       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 10 06:15:47 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:47.868805     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/df90f057-bca7-448f-9c97-e9439334019b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ccjtq\" (UID: \"df90f057-bca7-448f-9c97-e9439334019b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ccjtq"
	Dec 10 06:15:47 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:47.868886     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz97g\" (UniqueName: \"kubernetes.io/projected/df90f057-bca7-448f-9c97-e9439334019b-kube-api-access-xz97g\") pod \"kubernetes-dashboard-855c9754f9-ccjtq\" (UID: \"df90f057-bca7-448f-9c97-e9439334019b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ccjtq"
	Dec 10 06:15:50 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:50.336631     715 scope.go:117] "RemoveContainer" containerID="380ad23b0672bd065615d1a14119ffb5390b95316c302017dd727738fe16e357"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:51.342044     715 scope.go:117] "RemoveContainer" containerID="380ad23b0672bd065615d1a14119ffb5390b95316c302017dd727738fe16e357"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:51.342385     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: E1210 06:15:51.342574     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:51.500715     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:15:52 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:52.347022     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:15:52 default-k8s-diff-port-125336 kubelet[715]: E1210 06:15:52.347260     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:15:53 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:53.360627     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ccjtq" podStartSLOduration=1.183017937 podStartE2EDuration="6.360605138s" podCreationTimestamp="2025-12-10 06:15:47 +0000 UTC" firstStartedPulling="2025-12-10 06:15:48.067712302 +0000 UTC m=+6.873535688" lastFinishedPulling="2025-12-10 06:15:53.245299497 +0000 UTC m=+12.051122889" observedRunningTime="2025-12-10 06:15:53.360407716 +0000 UTC m=+12.166231112" watchObservedRunningTime="2025-12-10 06:15:53.360605138 +0000 UTC m=+12.166428533"
	Dec 10 06:15:59 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:59.494497     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:15:59 default-k8s-diff-port-125336 kubelet[715]: E1210 06:15:59.494667     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:12.280709     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:12.394324     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:12.394586     715 scope.go:117] "RemoveContainer" containerID="62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: E1210 06:16:12.394797     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:15 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:15.403211     715 scope.go:117] "RemoveContainer" containerID="8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b"
	Dec 10 06:16:19 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:19.494189     715 scope.go:117] "RemoveContainer" containerID="62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	Dec 10 06:16:19 default-k8s-diff-port-125336 kubelet[715]: E1210 06:16:19.494421     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:31 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:31.280689     715 scope.go:117] "RemoveContainer" containerID="62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	Dec 10 06:16:31 default-k8s-diff-port-125336 kubelet[715]: E1210 06:16:31.280950     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: kubelet.service: Consumed 1.559s CPU time.
	
	
	==> kubernetes-dashboard [164632a10922a2106f042cad684065136ba79e69def7383698535847ea79adde] <==
	2025/12/10 06:15:53 Starting overwatch
	2025/12/10 06:15:53 Using namespace: kubernetes-dashboard
	2025/12/10 06:15:53 Using in-cluster config to connect to apiserver
	2025/12/10 06:15:53 Using secret token for csrf signing
	2025/12/10 06:15:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:15:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:15:53 Successful initial request to the apiserver, version: v1.34.3
	2025/12/10 06:15:53 Generating JWE encryption key
	2025/12/10 06:15:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:15:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:15:53 Initializing JWE encryption key from synchronized object
	2025/12/10 06:15:53 Creating in-cluster Sidecar client
	2025/12/10 06:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:15:53 Serving insecurely on HTTP port: 9090
	2025/12/10 06:16:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b] <==
	I1210 06:15:44.675101       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:16:14.678223       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f] <==
	I1210 06:16:15.456770       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:16:15.463305       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:16:15.463345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:16:15.465196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:18.919227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:23.179840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:26.778196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:29.831030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:32.853303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:32.857205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:16:32.857368       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:16:32.857496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125336_4b30d9b0-a2b1-463a-bebf-faa373cc0f9b!
	I1210 06:16:32.857501       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e5ce82f-82e7-4b42-b704-b5ef142d393d", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-125336_4b30d9b0-a2b1-463a-bebf-faa373cc0f9b became leader
	W1210 06:16:32.859740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:32.862716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:16:32.958593       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125336_4b30d9b0-a2b1-463a-bebf-faa373cc0f9b!
	W1210 06:16:34.865511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:34.869413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:36.872849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:36.876579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:38.879192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:38.884666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
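Aside on the readiness wait shown in the minikube start log above (the repeated pod_ready lines for the kube-system pods): the sketch below is a minimal, illustrative client-go polling loop that approximates that behavior. It is not minikube's actual implementation; the kubeconfig location, label selectors, and 4-minute timeout are assumptions taken from the log text.

	// Illustrative only: poll kube-system pods for Readiness, roughly as the
	// pod_ready log lines above describe. Not minikube's real code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes the default kubeconfig (~/.kube/config) written by `minikube start`.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Label selectors the log shows minikube waiting on in kube-system.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}

		// The log reports "extra waiting up to 4m0s".
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		for _, sel := range selectors {
			for {
				pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					panic(err) // includes context deadline exceeded after 4m
				}
				ready := len(pods.Items) > 0
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						ready = false
					}
				}
				if ready {
					fmt.Printf("pods with %q are Ready\n", sel)
					break
				}
				time.Sleep(2 * time.Second) // the log polls on a similar cadence
			}
		}
	}

The same check can of course be done from the shell with the test's own kubectl context; the Go form is shown only because it mirrors the per-label, per-pod polling visible in the pod_ready lines.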
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336: exit status 2 (314.312337ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-125336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-125336
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-125336:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22",
	        "Created": "2025-12-10T06:14:12.606946513Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 399194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:15:34.491908876Z",
	            "FinishedAt": "2025-12-10T06:15:33.349641769Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/hosts",
	        "LogPath": "/var/lib/docker/containers/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22/2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22-json.log",
	        "Name": "/default-k8s-diff-port-125336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-125336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-125336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b7aea94b35697845da2f4c16e920629381627ad8fcce3f7bf5029e3a85cdf22",
	                "LowerDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268-init/diff:/var/lib/docker/overlay2/b62e2f8db4877fd6b32453256d2aeab173581bfdfbed6c87a5c3b6dd49dbb983/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee672556c7e645ad7270e0982a18173816f8e37df04d4f2836ca903314bd268/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-125336",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-125336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-125336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-125336",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-125336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f28bc84dd74ed2eac0658cae2f7cf7483c2ce290d0ef9abc8468a25bedd38574",
	            "SandboxKey": "/var/run/docker/netns/f28bc84dd74e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-125336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6dcc364cf8d2e6fffb8ab01503e1fba4cf2ae27c41034eeff5b62eed98af1ff5",
	                    "EndpointID": "5011876e8acd6ec08b6dd9bdbc1413b661254c8bc35f7519eb83a791206ba16d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d2:2b:53:d8:3f:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-125336",
	                        "2b7aea94b356"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
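For the Pause failure, the relevant fields in the inspect output above are State.Status, State.Paused and the published host ports. The hedged Go sketch below pulls just those fields with a docker inspect format template (the 22/tcp index expression is the same one the harness uses later in these logs); it assumes the docker CLI is on PATH.

	// inspectPauseState extracts only the fields the Pause post-mortem cares
	// about from `docker inspect`, using a Go template instead of parsing the
	// full JSON. Illustrative sketch only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func inspectPauseState(container string) (string, error) {
		out, err := exec.Command("docker", "inspect",
			"--format", "status={{.State.Status}} paused={{.State.Paused}} ssh={{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}",
			container,
		).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		state, err := inspectPauseState("default-k8s-diff-port-125336")
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		// For the run captured above this would print roughly:
		// status=running paused=false ssh=33138
		fmt.Println(state)
	}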
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336: exit status 2 (305.123726ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
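minikube status exited non-zero here even though the Host field prints Running, and the harness explicitly treats exit status 2 as possibly OK. The sketch below shows one way to capture both the formatted output and the exit code without aborting; it is illustrative and assumes minikube is on PATH and the profile name from this run.

	// statusWithCode runs `minikube status` with a Go-template format and
	// returns both its stdout and its exit code, so a non-zero exit (like the
	// "exit status 2" seen above) can be inspected rather than treated as a
	// hard failure. Illustrative sketch only.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func statusWithCode(profile, field string) (string, int, error) {
		cmd := exec.Command("minikube", "status",
			"--format", "{{."+field+"}}",
			"-p", profile,
		)
		out, err := cmd.Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // non-zero flags unhealthy components
			err = nil                 // keep the captured output; caller decides
		}
		return strings.TrimSpace(string(out)), code, err
	}

	func main() {
		host, code, err := statusWithCode("default-k8s-diff-port-125336", "Host")
		if err != nil {
			fmt.Println("could not run minikube status:", err)
			return
		}
		fmt.Printf("Host=%s (exit code %d, may still be ok)\n", host, code)
	}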
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-125336 logs -n 25
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	│ start   │ -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-125336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-125336 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-218688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ stop    │ -p newest-cni-218688 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ start   │ -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:16 UTC │
	│ image   │ newest-cni-218688 image list --format=json                                                                                                                                                                                                         │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p newest-cni-218688 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ image   │ no-preload-468539 image list --format=json                                                                                                                                                                                                         │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p no-preload-468539 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p newest-cni-218688                                                                                                                                                                                                                               │ newest-cni-218688            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ delete  │ -p no-preload-468539                                                                                                                                                                                                                               │ no-preload-468539            │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ image   │ embed-certs-028500 image list --format=json                                                                                                                                                                                                        │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │ 10 Dec 25 06:15 UTC │
	│ pause   │ -p embed-certs-028500 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:15 UTC │                     │
	│ delete  │ -p embed-certs-028500                                                                                                                                                                                                                              │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │ 10 Dec 25 06:16 UTC │
	│ delete  │ -p embed-certs-028500                                                                                                                                                                                                                              │ embed-certs-028500           │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │ 10 Dec 25 06:16 UTC │
	│ image   │ default-k8s-diff-port-125336 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │ 10 Dec 25 06:16 UTC │
	│ pause   │ -p default-k8s-diff-port-125336 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-125336 │ jenkins │ v1.37.0 │ 10 Dec 25 06:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:15:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:15:34.136263  398989 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:34.136365  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136370  398989 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:34.136374  398989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:34.136589  398989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:15:34.137019  398989 out.go:368] Setting JSON to false
	I1210 06:15:34.138324  398989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3478,"bootTime":1765343856,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:15:34.138383  398989 start.go:143] virtualization: kvm guest
	I1210 06:15:34.140369  398989 out.go:179] * [default-k8s-diff-port-125336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:15:34.141455  398989 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:15:34.141495  398989 notify.go:221] Checking for updates...
	I1210 06:15:34.144149  398989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:15:34.145219  398989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:34.146212  398989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:15:34.147189  398989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:15:34.148570  398989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:15:34.150487  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:34.151311  398989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:15:34.181230  398989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:15:34.181357  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.246485  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.23498397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.246649  398989 docker.go:319] overlay module found
	I1210 06:15:34.248892  398989 out.go:179] * Using the docker driver based on existing profile
	I1210 06:15:34.250044  398989 start.go:309] selected driver: docker
	I1210 06:15:34.250071  398989 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.250210  398989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:15:34.250813  398989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:15:34.316341  398989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:15:34.305292083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:15:34.316682  398989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:34.316710  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:34.316776  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:34.316830  398989 start.go:353] cluster config:
	{Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:34.318321  398989 out.go:179] * Starting "default-k8s-diff-port-125336" primary control-plane node in "default-k8s-diff-port-125336" cluster
	I1210 06:15:34.319196  398989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:15:34.320175  398989 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:15:34.321155  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:34.321256  398989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 06:15:34.344393  398989 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.347229  398989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:15:34.347250  398989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:15:34.430385  398989 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 06:15:34.430536  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.430685  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.430831  398989 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:15:34.430871  398989 start.go:360] acquireMachinesLock for default-k8s-diff-port-125336: {Name:mk1b9a5beba896eecc2201d27beab95b8159d676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.430953  398989 start.go:364] duration metric: took 37.573µs to acquireMachinesLock for "default-k8s-diff-port-125336"
	I1210 06:15:34.430971  398989 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:15:34.430976  398989 fix.go:54] fixHost starting: 
	I1210 06:15:34.431250  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.454438  398989 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125336: state=Stopped err=<nil>
	W1210 06:15:34.454482  398989 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:15:33.023453  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:33.023497  396996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:33.023579  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.044470  396996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.044498  396996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:33.044561  396996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-218688
	I1210 06:15:33.055221  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.060071  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.070394  396996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/newest-cni-218688/id_rsa Username:docker}
	I1210 06:15:33.143159  396996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:33.157435  396996 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:33.157507  396996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:33.170632  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:33.171889  396996 api_server.go:72] duration metric: took 184.694932ms to wait for apiserver process to appear ...
	I1210 06:15:33.171914  396996 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:33.171932  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:33.175983  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:33.176026  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:33.187123  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:33.192327  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:33.192345  396996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:33.208241  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:33.208263  396996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:33.223466  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:33.223489  396996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:33.239352  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:33.239373  396996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:33.254731  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:33.254747  396996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:33.268149  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:33.268164  396996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:33.281962  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:33.281981  396996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:33.294762  396996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:33.294777  396996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:33.308261  396996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:34.066152  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.066176  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.066192  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.079065  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:15:34.079117  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:15:34.172751  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.179376  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.179407  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.672823  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:34.677978  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:34.678023  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:34.680262  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.509569955s)
	I1210 06:15:34.680319  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.493167455s)
	I1210 06:15:34.680472  396996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.372172224s)
	I1210 06:15:34.684547  396996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-218688 addons enable metrics-server
	
	I1210 06:15:34.693826  396996 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:15:34.695479  396996 addons.go:530] duration metric: took 1.708260214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:35.172871  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.178128  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:35.178152  396996 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:35.672391  396996 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 06:15:35.676418  396996 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 06:15:35.677341  396996 api_server.go:141] control plane version: v1.35.0-rc.1
	I1210 06:15:35.677363  396996 api_server.go:131] duration metric: took 2.505442988s to wait for apiserver health ...
	I1210 06:15:35.677373  396996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:35.680615  396996 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:35.680642  396996 system_pods.go:61] "coredns-7d764666f9-44pd7" [59f9ee36-231a-4116-a88e-60d48b054690] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680651  396996 system_pods.go:61] "etcd-newest-cni-218688" [c27a2601-2917-44f3-966c-b554d5b92c02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:35.680657  396996 system_pods.go:61] "kindnet-n75st" [33becf6b-71b4-4682-81bc-c41d280389e3] Running
	I1210 06:15:35.680665  396996 system_pods.go:61] "kube-apiserver-newest-cni-218688" [a423257c-9365-4560-865a-9de59f0aafeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:35.680674  396996 system_pods.go:61] "kube-controller-manager-newest-cni-218688" [5a19eab1-194c-4d33-9aa6-5cce8ba87a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:35.680682  396996 system_pods.go:61] "kube-proxy-tlj9s" [3ff684af-caff-4db8-991a-8ba99fe5f326] Running
	I1210 06:15:35.680687  396996 system_pods.go:61] "kube-scheduler-newest-cni-218688" [8063cc2c-8c98-4490-94af-1613e4881229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:35.680698  396996 system_pods.go:61] "storage-provisioner" [a10bfb27-694c-4654-a067-8f36fe743de7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:15:35.680705  396996 system_pods.go:74] duration metric: took 3.328176ms to wait for pod list to return data ...
	I1210 06:15:35.680714  396996 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:35.682837  396996 default_sa.go:45] found service account: "default"
	I1210 06:15:35.682855  396996 default_sa.go:55] duration metric: took 2.134837ms for default service account to be created ...
	I1210 06:15:35.682865  396996 kubeadm.go:587] duration metric: took 2.695675575s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:15:35.682879  396996 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:35.684913  396996 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:35.684939  396996 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:35.684951  396996 node_conditions.go:105] duration metric: took 2.068174ms to run NodePressure ...
	I1210 06:15:35.684962  396996 start.go:242] waiting for startup goroutines ...
	I1210 06:15:35.684968  396996 start.go:247] waiting for cluster config update ...
	I1210 06:15:35.684977  396996 start.go:256] writing updated cluster config ...
	I1210 06:15:35.685255  396996 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:35.731197  396996 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1210 06:15:35.733185  396996 out.go:179] * Done! kubectl is now configured to use "newest-cni-218688" cluster and "default" namespace by default
	W1210 06:15:33.147258  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:35.148317  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:34.458179  398989 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-125336" ...
	I1210 06:15:34.458256  398989 cli_runner.go:164] Run: docker start default-k8s-diff-port-125336
	I1210 06:15:34.606122  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.751260  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:34.755727  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:34.772295  398989 kic.go:430] container "default-k8s-diff-port-125336" state is running.
	I1210 06:15:34.772778  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:34.795691  398989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/config.json ...
	I1210 06:15:34.795975  398989 machine.go:94] provisionDockerMachine start ...
	I1210 06:15:34.796067  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:34.815579  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:34.815958  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:34.815979  398989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:15:34.816656  398989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48068->127.0.0.1:33138: read: connection reset by peer
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk0763a50664c56b0862900e71862307cba94d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895740  398989 cache.go:107] acquiring lock: {Name:mkdd768341d1a3481ecaec697219b32d4a715834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895735  398989 cache.go:107] acquiring lock: {Name:mkd670cede0997c7eb0e9bd388a82e1cb2741031 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mk4d792f4bac33dc8779d7cc5ff40393c94e0ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895776  398989 cache.go:107] acquiring lock: {Name:mkc3a95f67321b2fa8faeb966829fb60cf65d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895817  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:15:34.895824  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:15:34.895828  398989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.45µs
	I1210 06:15:34.895834  398989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 128.77µs
	I1210 06:15:34.895694  398989 cache.go:107] acquiring lock: {Name:mkcb073544c2d92de0e0765e38c37b4f4d2ac46b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895843  398989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:15:34.895840  398989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:15:34.895852  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:15:34.895700  398989 cache.go:107] acquiring lock: {Name:mk4839690ba979036496a7cee1de2814aaad3bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895863  398989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 181.132µs
	I1210 06:15:34.895880  398989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:15:34.895899  398989 cache.go:107] acquiring lock: {Name:mk796942baeaa838a47daad2be5ca7532234da42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:15:34.895924  398989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 255.105µs
	I1210 06:15:34.895929  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:15:34.895932  398989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:15:34.895908  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:15:34.895944  398989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 265.291µs
	I1210 06:15:34.895951  398989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 211.334µs
	I1210 06:15:34.895966  398989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:15:34.895972  398989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:15:34.895982  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:15:34.895990  398989 cache.go:115] /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:15:34.895996  398989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 258.502µs
	I1210 06:15:34.895999  398989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 139.654µs
	I1210 06:15:34.896008  398989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:15:34.896011  398989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-5725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:15:34.896019  398989 cache.go:87] Successfully saved all images to host disk.
	I1210 06:15:37.959177  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:37.959204  398989 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-125336"
	I1210 06:15:37.959258  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:37.979224  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:37.979665  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:37.979696  398989 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125336 && echo "default-k8s-diff-port-125336" | sudo tee /etc/hostname
	I1210 06:15:38.128128  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125336
	
	I1210 06:15:38.128197  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.146305  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.146620  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.146653  398989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125336/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:15:38.278124  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:15:38.278149  398989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-5725/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-5725/.minikube}
	I1210 06:15:38.278167  398989 ubuntu.go:190] setting up certificates
	I1210 06:15:38.278176  398989 provision.go:84] configureAuth start
	I1210 06:15:38.278222  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:38.296606  398989 provision.go:143] copyHostCerts
	I1210 06:15:38.296674  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem, removing ...
	I1210 06:15:38.296692  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem
	I1210 06:15:38.296785  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/ca.pem (1078 bytes)
	I1210 06:15:38.296919  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem, removing ...
	I1210 06:15:38.296932  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem
	I1210 06:15:38.296972  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/cert.pem (1123 bytes)
	I1210 06:15:38.297072  398989 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem, removing ...
	I1210 06:15:38.297098  398989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem
	I1210 06:15:38.297140  398989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-5725/.minikube/key.pem (1679 bytes)
	I1210 06:15:38.297233  398989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125336 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-125336 localhost minikube]
	I1210 06:15:38.401725  398989 provision.go:177] copyRemoteCerts
	I1210 06:15:38.401781  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:15:38.401814  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.419489  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:38.515784  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:15:38.532680  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:15:38.549493  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:15:38.565601  398989 provision.go:87] duration metric: took 287.41ms to configureAuth
	I1210 06:15:38.565627  398989 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:15:38.565820  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:38.565943  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.583842  398989 main.go:143] libmachine: Using SSH client type: native
	I1210 06:15:38.584037  398989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1210 06:15:38.584055  398989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:15:38.911289  398989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:15:38.911317  398989 machine.go:97] duration metric: took 4.115324474s to provisionDockerMachine
	I1210 06:15:38.911331  398989 start.go:293] postStartSetup for "default-k8s-diff-port-125336" (driver="docker")
	I1210 06:15:38.911344  398989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:15:38.911417  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:15:38.911463  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:38.932694  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.032024  398989 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:15:39.035849  398989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:15:39.035874  398989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:15:39.035883  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/addons for local assets ...
	I1210 06:15:39.035933  398989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-5725/.minikube/files for local assets ...
	I1210 06:15:39.036028  398989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem -> 92532.pem in /etc/ssl/certs
	I1210 06:15:39.036160  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:15:39.044513  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:39.061424  398989 start.go:296] duration metric: took 150.067555ms for postStartSetup
	I1210 06:15:39.061507  398989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:15:39.061554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.080318  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.174412  398989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:15:39.179699  398989 fix.go:56] duration metric: took 4.748715142s for fixHost
	I1210 06:15:39.179726  398989 start.go:83] releasing machines lock for "default-k8s-diff-port-125336", held for 4.748759367s
	I1210 06:15:39.179795  398989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-125336
	I1210 06:15:39.198657  398989 ssh_runner.go:195] Run: cat /version.json
	I1210 06:15:39.198712  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.198747  398989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:15:39.198819  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:39.220204  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.220241  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:39.317475  398989 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:39.391108  398989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:15:39.430876  398989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:15:39.435737  398989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:15:39.435812  398989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:15:39.444134  398989 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:15:39.444154  398989 start.go:496] detecting cgroup driver to use...
	I1210 06:15:39.444185  398989 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:15:39.444220  398989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:15:39.458418  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:15:39.470158  398989 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:15:39.470210  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:15:39.485432  398989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:15:39.497705  398989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:15:39.587848  398989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:15:39.679325  398989 docker.go:234] disabling docker service ...
	I1210 06:15:39.679390  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:15:39.695744  398989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:15:39.710121  398989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:15:39.803290  398989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:15:39.889666  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:15:39.901841  398989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:15:39.916001  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.053859  398989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:15:40.053907  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.064032  398989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:15:40.064119  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.074052  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.082799  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.091069  398989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:15:40.099125  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.108348  398989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.116442  398989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:15:40.124562  398989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:15:40.131659  398989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:15:40.139831  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:40.235238  398989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:15:40.390045  398989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:15:40.390127  398989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:15:40.394019  398989 start.go:564] Will wait 60s for crictl version
	I1210 06:15:40.394073  398989 ssh_runner.go:195] Run: which crictl
	I1210 06:15:40.397521  398989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:15:40.422130  398989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:15:40.422196  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.449888  398989 ssh_runner.go:195] Run: crio --version
	I1210 06:15:40.482873  398989 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:15:40.484109  398989 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-125336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:15:40.504017  398989 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:15:40.508495  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:40.519961  398989 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:15:40.520150  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.655009  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.788669  398989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 06:15:40.920137  398989 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:15:40.920210  398989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:15:40.955931  398989 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:15:40.955957  398989 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:15:40.955966  398989 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1210 06:15:40.956107  398989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-125336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:15:40.956192  398989 ssh_runner.go:195] Run: crio config
	I1210 06:15:41.004526  398989 cni.go:84] Creating CNI manager for ""
	I1210 06:15:41.004548  398989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:15:41.004564  398989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:15:41.004584  398989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125336 NodeName:default-k8s-diff-port-125336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:15:41.004697  398989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:15:41.004752  398989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:15:41.013662  398989 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:15:41.013711  398989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:15:41.021680  398989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 06:15:41.034639  398989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:15:41.047897  398989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1210 06:15:41.060681  398989 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:15:41.064298  398989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:15:41.074539  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:41.167815  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:41.192312  398989 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336 for IP: 192.168.103.2
	I1210 06:15:41.192334  398989 certs.go:195] generating shared ca certs ...
	I1210 06:15:41.192367  398989 certs.go:227] acquiring lock for ca certs: {Name:mka90f54d579d39a8508aa46a6cef002ccad5d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:41.192505  398989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key
	I1210 06:15:41.192546  398989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key
	I1210 06:15:41.192557  398989 certs.go:257] generating profile certs ...
	I1210 06:15:41.192643  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/client.key
	I1210 06:15:41.192694  398989 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key.75b93134
	I1210 06:15:41.192729  398989 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key
	I1210 06:15:41.192855  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem (1338 bytes)
	W1210 06:15:41.192897  398989 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253_empty.pem, impossibly tiny 0 bytes
	I1210 06:15:41.192910  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:15:41.192952  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:15:41.192986  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:15:41.193016  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/certs/key.pem (1679 bytes)
	I1210 06:15:41.193074  398989 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem (1708 bytes)
	I1210 06:15:41.193841  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:15:41.212216  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:15:41.230779  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:15:41.249215  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:15:41.273141  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:15:41.291653  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:15:41.308892  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:15:41.328983  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/default-k8s-diff-port-125336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:15:41.348815  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/certs/9253.pem --> /usr/share/ca-certificates/9253.pem (1338 bytes)
	I1210 06:15:41.369178  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/ssl/certs/92532.pem --> /usr/share/ca-certificates/92532.pem (1708 bytes)
	I1210 06:15:41.390044  398989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:15:41.407887  398989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:15:41.422822  398989 ssh_runner.go:195] Run: openssl version
	I1210 06:15:41.430217  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.438931  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:15:41.447682  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451942  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.451995  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:15:41.496117  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:15:41.504580  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.512960  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9253.pem /etc/ssl/certs/9253.pem
	I1210 06:15:41.521564  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525244  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:37 /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.525308  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9253.pem
	I1210 06:15:41.564172  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:15:41.572852  398989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.580900  398989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92532.pem /etc/ssl/certs/92532.pem
	I1210 06:15:41.588301  398989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592675  398989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:37 /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.592721  398989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92532.pem
	I1210 06:15:41.637108  398989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:15:41.645490  398989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:15:41.649879  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:15:41.690638  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:15:41.747836  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:15:41.800228  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:15:41.862694  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:15:41.914250  398989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:15:41.958747  398989 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-125336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-125336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:15:41.959041  398989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:41.959166  398989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:41.998590  398989 cri.go:89] found id: "92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285"
	I1210 06:15:41.998610  398989 cri.go:89] found id: "2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368"
	I1210 06:15:41.998616  398989 cri.go:89] found id: "355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16"
	I1210 06:15:41.998621  398989 cri.go:89] found id: "4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d"
	I1210 06:15:41.998625  398989 cri.go:89] found id: ""
	I1210 06:15:41.998665  398989 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:15:42.012230  398989 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:15:42.012308  398989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:15:42.023047  398989 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:15:42.023062  398989 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:15:42.023133  398989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:15:42.032028  398989 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:15:42.033327  398989 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-125336" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.034299  398989 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-5725/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-125336" cluster setting kubeconfig missing "default-k8s-diff-port-125336" context setting]
	I1210 06:15:42.035703  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.037888  398989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:15:42.047350  398989 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:15:42.047375  398989 kubeadm.go:602] duration metric: took 24.306597ms to restartPrimaryControlPlane
	I1210 06:15:42.047383  398989 kubeadm.go:403] duration metric: took 88.644178ms to StartCluster
	I1210 06:15:42.047399  398989 settings.go:142] acquiring lock: {Name:mk8c38e27b37253ca8cb2a2adf6342f0db270902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.047471  398989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:15:42.049858  398989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-5725/kubeconfig: {Name:mkfa60e97179780abb80cddf99075aed36301884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:15:42.050141  398989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:15:42.050363  398989 config.go:182] Loaded profile config "default-k8s-diff-port-125336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:42.050409  398989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:15:42.050484  398989 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.050502  398989 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.050511  398989 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:15:42.050535  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051015  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051175  398989 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051191  398989 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.051199  398989 addons.go:248] addon dashboard should already be in state true
	I1210 06:15:42.051223  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.051559  398989 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-125336"
	I1210 06:15:42.051618  398989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125336"
	I1210 06:15:42.051583  398989 out.go:179] * Verifying Kubernetes components...
	I1210 06:15:42.051661  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.051950  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.053296  398989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:15:42.082195  398989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:15:42.082199  398989 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:15:42.083378  398989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.083403  398989 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1210 06:15:37.646711  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:39.648042  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	W1210 06:15:41.648596  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:42.083414  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:15:42.083554  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.086520  398989 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-125336"
	W1210 06:15:42.086542  398989 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:15:42.086569  398989 host.go:66] Checking if "default-k8s-diff-port-125336" exists ...
	I1210 06:15:42.086807  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:15:42.086824  398989 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:15:42.086879  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.088501  398989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-125336 --format={{.State.Status}}
	I1210 06:15:42.127971  398989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.127995  398989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:15:42.128058  398989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-125336
	I1210 06:15:42.131157  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.131148  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.163643  398989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/default-k8s-diff-port-125336/id_rsa Username:docker}
	I1210 06:15:42.238425  398989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:15:42.261214  398989 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:42.266856  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:15:42.266878  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:15:42.273292  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:15:42.296500  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:15:42.296642  398989 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:15:42.316168  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:15:42.322727  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:15:42.322747  398989 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:15:42.342110  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:15:42.342132  398989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:15:42.364017  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:15:42.364037  398989 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:15:42.383601  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:15:42.383628  398989 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:15:42.400222  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:15:42.400267  398989 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:15:42.413822  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:15:42.413841  398989 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:15:42.428985  398989 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:42.429002  398989 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:15:42.445006  398989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:15:43.730716  398989 node_ready.go:49] node "default-k8s-diff-port-125336" is "Ready"
	I1210 06:15:43.730761  398989 node_ready.go:38] duration metric: took 1.469517861s for node "default-k8s-diff-port-125336" to be "Ready" ...
	I1210 06:15:43.730780  398989 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:15:43.730833  398989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:15:44.295467  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.022145188s)
	I1210 06:15:44.295527  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.979317742s)
	I1210 06:15:44.295605  398989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.850565953s)
	I1210 06:15:44.295731  398989 api_server.go:72] duration metric: took 2.245559846s to wait for apiserver process to appear ...
	I1210 06:15:44.295748  398989 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:15:44.295770  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.297453  398989 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-125336 addons enable metrics-server
	
	I1210 06:15:44.301230  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.301258  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:44.307227  398989 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 06:15:44.147036  389191 pod_ready.go:104] pod "coredns-66bc5c9577-8xwfc" is not "Ready", error: <nil>
	I1210 06:15:46.146566  389191 pod_ready.go:94] pod "coredns-66bc5c9577-8xwfc" is "Ready"
	I1210 06:15:46.146592  389191 pod_ready.go:86] duration metric: took 37.005340048s for pod "coredns-66bc5c9577-8xwfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.149120  389191 pod_ready.go:83] waiting for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.152937  389191 pod_ready.go:94] pod "etcd-embed-certs-028500" is "Ready"
	I1210 06:15:46.152956  389191 pod_ready.go:86] duration metric: took 3.81638ms for pod "etcd-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.154886  389191 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.158540  389191 pod_ready.go:94] pod "kube-apiserver-embed-certs-028500" is "Ready"
	I1210 06:15:46.158566  389191 pod_ready.go:86] duration metric: took 3.65933ms for pod "kube-apiserver-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.160461  389191 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.345207  389191 pod_ready.go:94] pod "kube-controller-manager-embed-certs-028500" is "Ready"
	I1210 06:15:46.345232  389191 pod_ready.go:86] duration metric: took 184.75138ms for pod "kube-controller-manager-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.545176  389191 pod_ready.go:83] waiting for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:46.945367  389191 pod_ready.go:94] pod "kube-proxy-sr7kh" is "Ready"
	I1210 06:15:46.945391  389191 pod_ready.go:86] duration metric: took 400.193359ms for pod "kube-proxy-sr7kh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.145257  389191 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544937  389191 pod_ready.go:94] pod "kube-scheduler-embed-certs-028500" is "Ready"
	I1210 06:15:47.544958  389191 pod_ready.go:86] duration metric: took 399.673562ms for pod "kube-scheduler-embed-certs-028500" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:15:47.544969  389191 pod_ready.go:40] duration metric: took 38.406618977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:47.594190  389191 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:15:47.595325  389191 out.go:179] * Done! kubectl is now configured to use "embed-certs-028500" cluster and "default" namespace by default
	I1210 06:15:44.308766  398989 addons.go:530] duration metric: took 2.258355424s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:15:44.795874  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:44.800857  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:15:44.800883  398989 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:15:45.296231  398989 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1210 06:15:45.301136  398989 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1210 06:15:45.302322  398989 api_server.go:141] control plane version: v1.34.3
	I1210 06:15:45.302347  398989 api_server.go:131] duration metric: took 1.006591687s to wait for apiserver health ...
	I1210 06:15:45.302357  398989 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:15:45.306315  398989 system_pods.go:59] 8 kube-system pods found
	I1210 06:15:45.306352  398989 system_pods.go:61] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.306367  398989 system_pods.go:61] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.306382  398989 system_pods.go:61] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.306398  398989 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.306414  398989 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.306429  398989 system_pods.go:61] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.306439  398989 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.306446  398989 system_pods.go:61] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.306457  398989 system_pods.go:74] duration metric: took 4.090626ms to wait for pod list to return data ...
	I1210 06:15:45.306469  398989 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:15:45.309065  398989 default_sa.go:45] found service account: "default"
	I1210 06:15:45.309111  398989 default_sa.go:55] duration metric: took 2.635327ms for default service account to be created ...
	I1210 06:15:45.309121  398989 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:15:45.312161  398989 system_pods.go:86] 8 kube-system pods found
	I1210 06:15:45.312188  398989 system_pods.go:89] "coredns-66bc5c9577-gkk6m" [0b83f27c-1359-488f-bf61-c716f522dfad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:15:45.312199  398989 system_pods.go:89] "etcd-default-k8s-diff-port-125336" [afbeb479-99ed-44cd-b9c3-cda0c638c270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:15:45.312211  398989 system_pods.go:89] "kindnet-lfds9" [14d4cc08-bd99-41e5-a772-b5197e8b16b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:15:45.312295  398989 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125336" [12a3028f-5f91-4217-bff2-527a5c4a0b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:15:45.312334  398989 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125336" [ee445b76-6256-4d08-a12d-c392acecca93] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:15:45.312348  398989 system_pods.go:89] "kube-proxy-mw5sp" [94c4f93c-3851-4ed9-ae3b-7900e64abf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:15:45.312364  398989 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125336" [f045b3cd-f095-44a0-9735-47a085eb5d83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:15:45.312380  398989 system_pods.go:89] "storage-provisioner" [d31f981a-faff-40fd-87cd-c2e5b25f8e2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:15:45.312393  398989 system_pods.go:126] duration metric: took 3.26398ms to wait for k8s-apps to be running ...
	I1210 06:15:45.312421  398989 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:15:45.312464  398989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:15:45.330746  398989 system_svc.go:56] duration metric: took 18.317711ms WaitForService to wait for kubelet
	I1210 06:15:45.330808  398989 kubeadm.go:587] duration metric: took 3.280637081s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:15:45.330849  398989 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:15:45.333665  398989 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:15:45.333690  398989 node_conditions.go:123] node cpu capacity is 8
	I1210 06:15:45.333707  398989 node_conditions.go:105] duration metric: took 2.852028ms to run NodePressure ...
	I1210 06:15:45.333720  398989 start.go:242] waiting for startup goroutines ...
	I1210 06:15:45.333730  398989 start.go:247] waiting for cluster config update ...
	I1210 06:15:45.333744  398989 start.go:256] writing updated cluster config ...
	I1210 06:15:45.334096  398989 ssh_runner.go:195] Run: rm -f paused
	I1210 06:15:45.338120  398989 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:15:45.341568  398989 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:15:47.347196  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:49.347509  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:51.348265  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:53.847175  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:56.347151  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:15:58.846930  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:01.346818  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:03.847144  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:06.346268  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:08.346933  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:10.848695  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:13.346002  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:15.346439  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:17.346561  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	W1210 06:16:19.845877  398989 pod_ready.go:104] pod "coredns-66bc5c9577-gkk6m" is not "Ready", error: <nil>
	I1210 06:16:21.846025  398989 pod_ready.go:94] pod "coredns-66bc5c9577-gkk6m" is "Ready"
	I1210 06:16:21.846050  398989 pod_ready.go:86] duration metric: took 36.504462899s for pod "coredns-66bc5c9577-gkk6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.848259  398989 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.851608  398989 pod_ready.go:94] pod "etcd-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:21.851627  398989 pod_ready.go:86] duration metric: took 3.347943ms for pod "etcd-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.853330  398989 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.856567  398989 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:21.856584  398989 pod_ready.go:86] duration metric: took 3.238739ms for pod "kube-apiserver-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:21.858225  398989 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.044224  398989 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:22.044251  398989 pod_ready.go:86] duration metric: took 186.009559ms for pod "kube-controller-manager-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.244699  398989 pod_ready.go:83] waiting for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.644027  398989 pod_ready.go:94] pod "kube-proxy-mw5sp" is "Ready"
	I1210 06:16:22.644053  398989 pod_ready.go:86] duration metric: took 399.322725ms for pod "kube-proxy-mw5sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:22.844992  398989 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:23.244069  398989 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-125336" is "Ready"
	I1210 06:16:23.244122  398989 pod_ready.go:86] duration metric: took 399.106623ms for pod "kube-scheduler-default-k8s-diff-port-125336" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:16:23.244133  398989 pod_ready.go:40] duration metric: took 37.90598909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:16:23.286217  398989 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 06:16:23.287910  398989 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-125336" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:15:55 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:15:55.139323378Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 06:15:55 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:15:55.142385274Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:15:55 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:15:55.142408656Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.281263943Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2abff1d7-3805-4426-b796-9e98a9840237 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.28216073Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4eb91fd3-19cf-4c38-9203-a391d210557d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.283135201Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper" id=f8b57e94-d40b-4faf-a34f-cb300d83c405 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.283275079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.288650034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.289109378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.31567174Z" level=info msg="Created container 62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper" id=f8b57e94-d40b-4faf-a34f-cb300d83c405 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.316201931Z" level=info msg="Starting container: 62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5" id=d2e1c792-d5ea-4035-99b0-2e56e134938d name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.317830236Z" level=info msg="Started container" PID=1755 containerID=62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper id=d2e1c792-d5ea-4035-99b0-2e56e134938d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a7a83ecfa1b79f71e998c7f947ce050845136820524f6971b9a0b6a6cf1652e
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.395602908Z" level=info msg="Removing container: 8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b" id=26511f8e-494d-4de7-bc16-3c575821f6b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:16:12 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:12.405120703Z" level=info msg="Removed container 8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4/dashboard-metrics-scraper" id=26511f8e-494d-4de7-bc16-3c575821f6b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.403621758Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=083d919f-79ad-4997-be05-617c36fcd009 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.404554407Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b37a0a9f-f1d6-4eb3-a724-73ba9ac7e514 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.405648994Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f3254d85-5736-4d45-8d2e-436ec5ebd790 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.40587892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.41142929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.411625833Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f3d26cb1f287ad5fb35cea3469386736dc484e18dd680b5f260ee19cc4aea704/merged/etc/passwd: no such file or directory"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.41166157Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f3d26cb1f287ad5fb35cea3469386736dc484e18dd680b5f260ee19cc4aea704/merged/etc/group: no such file or directory"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.411960713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.44277752Z" level=info msg="Created container cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f: kube-system/storage-provisioner/storage-provisioner" id=f3254d85-5736-4d45-8d2e-436ec5ebd790 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.443324706Z" level=info msg="Starting container: cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f" id=85e98f9f-8003-46c7-a217-a9d3b3768951 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:16:15 default-k8s-diff-port-125336 crio[564]: time="2025-12-10T06:16:15.44514755Z" level=info msg="Started container" PID=1772 containerID=cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f description=kube-system/storage-provisioner/storage-provisioner id=85e98f9f-8003-46c7-a217-a9d3b3768951 name=/runtime.v1.RuntimeService/StartContainer sandboxID=571c423b375318362761373f92e5929ec59453acc866deb9de4641db3bcee7c7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	cca3ae445cd55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   571c423b37531       storage-provisioner                                    kube-system
	62a08a36d08da       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   4a7a83ecfa1b7       dashboard-metrics-scraper-6ffb444bf9-22cr4             kubernetes-dashboard
	164632a10922a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   cf9bd9d5dec16       kubernetes-dashboard-855c9754f9-ccjtq                  kubernetes-dashboard
	a43f1b12bb382       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   7a4151a9a9ba4       busybox                                                default
	d008c80af5289       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   24c7af7f3a846       coredns-66bc5c9577-gkk6m                               kube-system
	4cae6db5d58d5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   51bfcadda644c       kindnet-lfds9                                          kube-system
	9adb58aed15d4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           55 seconds ago      Running             kube-proxy                  0                   196964bcf9837       kube-proxy-mw5sp                                       kube-system
	8ed0496d0be7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   571c423b37531       storage-provisioner                                    kube-system
	92cdc11606d33       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           58 seconds ago      Running             kube-apiserver              0                   d68383e6a1e35       kube-apiserver-default-k8s-diff-port-125336            kube-system
	2dded97e81369       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           58 seconds ago      Running             kube-scheduler              0                   36c6b7cd8ae45       kube-scheduler-default-k8s-diff-port-125336            kube-system
	355b450a39b31       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   81a3c85adf9f1       etcd-default-k8s-diff-port-125336                      kube-system
	4492dccb6c585       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           58 seconds ago      Running             kube-controller-manager     0                   8c54dcd4d13a7       kube-controller-manager-default-k8s-diff-port-125336   kube-system
	
	
	==> coredns [d008c80af528911395e273d86d6218ebc4d984547613f7aacea14e288dffe717] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54972 - 8444 "HINFO IN 8248547206015904360.4558364408995920036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.516199534s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-125336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-125336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=default-k8s-diff-port-125336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_14_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-125336
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:16:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:14:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:16:24 +0000   Wed, 10 Dec 2025 06:15:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-125336
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                f4329173-01c3-494e-8c73-1314ca67fddf
	  Boot ID:                    b1b789e7-29ca-41f0-9541-8c4ef16372aa
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-gkk6m                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-default-k8s-diff-port-125336                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-lfds9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-default-k8s-diff-port-125336             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-125336    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-mw5sp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-default-k8s-diff-port-125336             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-22cr4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ccjtq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node default-k8s-diff-port-125336 event: Registered Node default-k8s-diff-port-125336 in Controller
	  Normal  NodeReady                100s               kubelet          Node default-k8s-diff-port-125336 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-125336 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-125336 event: Registered Node default-k8s-diff-port-125336 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e ac 6a 3a 10 14 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e1 45 1e 59 dc 08 06
	[ +12.231886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff aa b6 c3 b5 b8 e1 08 06
	[  +0.018522] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[Dec10 06:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	[  +0.002987] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 7f a1 c5 f7 73 08 06
	[  +1.205570] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[  +4.623767] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 10 2d 23 5f e6 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 5b 96 ba 91 6c 08 06
	[ +12.537493] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 fa d0 2a 46 66 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 04 b2 ab d7 49 08 06
	[ +31.413502] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 1b 61 8f e3 57 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fa 91 ba 36 9a ae 08 06
	
	
	==> etcd [355b450a39b31a387be491afe63facd495d64617f6108b0a4b1b5123f1758d16] <==
	{"level":"warn","ts":"2025-12-10T06:15:43.030143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.040258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.049628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.056462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.063374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.071161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.078986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.086930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.093412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.104204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.111901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.119210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.126035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.132883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.140382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.147777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.155255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.163330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.171418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.179128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.187752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.208723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.216555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.230615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:15:43.288109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60138","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:16:40 up 59 min,  0 user,  load average: 2.39, 3.94, 2.85
	Linux default-k8s-diff-port-125336 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cae6db5d58d5f55296fdd99cb88edcd6d5f157201404a728337fd012e8f1b6e] <==
	I1210 06:15:44.924142       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:15:44.924533       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:15:44.924766       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:15:44.924789       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:15:44.924816       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:15:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:15:45.124676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:15:45.124706       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:15:45.124718       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:15:45.124894       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:15:45.625509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:15:45.625545       1 metrics.go:72] Registering metrics
	I1210 06:15:45.625649       1 controller.go:711] "Syncing nftables rules"
	I1210 06:15:55.125060       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:15:55.125153       1 main.go:301] handling current node
	I1210 06:16:05.124973       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:05.125010       1 main.go:301] handling current node
	I1210 06:16:15.124188       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:15.124243       1 main.go:301] handling current node
	I1210 06:16:25.124970       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:25.125016       1 main.go:301] handling current node
	I1210 06:16:35.124255       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:16:35.124293       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92cdc11606d33aee3d477bf6cbe4ab80332206fde18c217d524f557e526b0285] <==
	I1210 06:15:43.775742       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:15:43.778061       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:15:43.778100       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:15:43.778991       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:15:43.775230       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:15:43.781573       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:15:43.781973       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:15:43.781981       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:15:43.781988       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:15:43.781648       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:15:43.793812       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:15:43.795327       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:15:43.844374       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:15:43.858364       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:15:44.069118       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:15:44.096650       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:15:44.113152       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:15:44.120316       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:15:44.126604       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:15:44.157039       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.126.99"}
	I1210 06:15:44.165345       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.217.36"}
	I1210 06:15:44.680570       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:15:47.162115       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:15:47.262511       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:15:47.614195       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4492dccb6c585536103a7303143f56d37e8a4fcd9cebebf3e45723b510e06b9d] <==
	I1210 06:15:47.108821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:15:47.108833       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:15:47.108841       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:15:47.108852       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:15:47.108855       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:15:47.108821       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:15:47.109060       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:15:47.109185       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 06:15:47.109293       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 06:15:47.109389       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:15:47.109428       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-125336"
	I1210 06:15:47.109404       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:15:47.109479       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 06:15:47.109558       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:15:47.110707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:15:47.110772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:15:47.111403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:15:47.111570       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:15:47.113839       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:15:47.113912       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:15:47.115426       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:15:47.117684       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:15:47.119945       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:15:47.121153       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:15:47.134570       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9adb58aed15d4ed422818a0c187aae20692217265ccf3d9f6007cd504c1d8982] <==
	I1210 06:15:44.711050       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:15:44.772999       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:15:44.873970       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:15:44.874016       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:15:44.874128       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:15:44.902358       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:15:44.902417       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:15:44.908045       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:15:44.908558       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:15:44.908621       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:44.910219       1 config.go:200] "Starting service config controller"
	I1210 06:15:44.910292       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:15:44.910246       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:15:44.910359       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:15:44.910262       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:15:44.910418       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:15:44.910364       1 config.go:309] "Starting node config controller"
	I1210 06:15:44.910463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:15:44.910493       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:15:45.010399       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:15:45.010484       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:15:45.010536       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2dded97e81369efefb822c9b0c8d6dfd3bbd053fe93054ad3a81cdce1d76f368] <==
	I1210 06:15:42.983312       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:15:44.340733       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1210 06:15:44.340756       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:15:44.346476       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:15:44.346537       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:15:44.346593       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:15:44.346605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:44.346612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:15:44.346615       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:44.347004       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:15:44.347028       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:15:44.447010       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:15:44.447050       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:15:44.447011       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 10 06:15:47 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:47.868805     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/df90f057-bca7-448f-9c97-e9439334019b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ccjtq\" (UID: \"df90f057-bca7-448f-9c97-e9439334019b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ccjtq"
	Dec 10 06:15:47 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:47.868886     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz97g\" (UniqueName: \"kubernetes.io/projected/df90f057-bca7-448f-9c97-e9439334019b-kube-api-access-xz97g\") pod \"kubernetes-dashboard-855c9754f9-ccjtq\" (UID: \"df90f057-bca7-448f-9c97-e9439334019b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ccjtq"
	Dec 10 06:15:50 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:50.336631     715 scope.go:117] "RemoveContainer" containerID="380ad23b0672bd065615d1a14119ffb5390b95316c302017dd727738fe16e357"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:51.342044     715 scope.go:117] "RemoveContainer" containerID="380ad23b0672bd065615d1a14119ffb5390b95316c302017dd727738fe16e357"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:51.342385     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: E1210 06:15:51.342574     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:15:51 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:51.500715     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:15:52 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:52.347022     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:15:52 default-k8s-diff-port-125336 kubelet[715]: E1210 06:15:52.347260     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:15:53 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:53.360627     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ccjtq" podStartSLOduration=1.183017937 podStartE2EDuration="6.360605138s" podCreationTimestamp="2025-12-10 06:15:47 +0000 UTC" firstStartedPulling="2025-12-10 06:15:48.067712302 +0000 UTC m=+6.873535688" lastFinishedPulling="2025-12-10 06:15:53.245299497 +0000 UTC m=+12.051122889" observedRunningTime="2025-12-10 06:15:53.360407716 +0000 UTC m=+12.166231112" watchObservedRunningTime="2025-12-10 06:15:53.360605138 +0000 UTC m=+12.166428533"
	Dec 10 06:15:59 default-k8s-diff-port-125336 kubelet[715]: I1210 06:15:59.494497     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:15:59 default-k8s-diff-port-125336 kubelet[715]: E1210 06:15:59.494667     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:12.280709     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:12.394324     715 scope.go:117] "RemoveContainer" containerID="8065ccb28c02c2e61eb0bae17d5b495be2a745ba630030b70be1cc6f54a5361b"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:12.394586     715 scope.go:117] "RemoveContainer" containerID="62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	Dec 10 06:16:12 default-k8s-diff-port-125336 kubelet[715]: E1210 06:16:12.394797     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:15 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:15.403211     715 scope.go:117] "RemoveContainer" containerID="8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b"
	Dec 10 06:16:19 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:19.494189     715 scope.go:117] "RemoveContainer" containerID="62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	Dec 10 06:16:19 default-k8s-diff-port-125336 kubelet[715]: E1210 06:16:19.494421     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:31 default-k8s-diff-port-125336 kubelet[715]: I1210 06:16:31.280689     715 scope.go:117] "RemoveContainer" containerID="62a08a36d08daed6e588bb1c0c295b57b19f2241aeb341608a013625741caae5"
	Dec 10 06:16:31 default-k8s-diff-port-125336 kubelet[715]: E1210 06:16:31.280950     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-22cr4_kubernetes-dashboard(543e8691-57ae-481e-9a20-7e195c61596e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-22cr4" podUID="543e8691-57ae-481e-9a20-7e195c61596e"
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:16:35 default-k8s-diff-port-125336 systemd[1]: kubelet.service: Consumed 1.559s CPU time.
	
	
	==> kubernetes-dashboard [164632a10922a2106f042cad684065136ba79e69def7383698535847ea79adde] <==
	2025/12/10 06:15:53 Using namespace: kubernetes-dashboard
	2025/12/10 06:15:53 Using in-cluster config to connect to apiserver
	2025/12/10 06:15:53 Using secret token for csrf signing
	2025/12/10 06:15:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:15:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:15:53 Successful initial request to the apiserver, version: v1.34.3
	2025/12/10 06:15:53 Generating JWE encryption key
	2025/12/10 06:15:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:15:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:15:53 Initializing JWE encryption key from synchronized object
	2025/12/10 06:15:53 Creating in-cluster Sidecar client
	2025/12/10 06:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:15:53 Serving insecurely on HTTP port: 9090
	2025/12/10 06:16:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:15:53 Starting overwatch
	
	
	==> storage-provisioner [8ed0496d0be7e2940a2664370db02c5f77609ff39d181f5c13426a0ee6fa740b] <==
	I1210 06:15:44.675101       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:16:14.678223       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cca3ae445cd55d0ae0acad0517846ebb38a9d80446a5793c2115b59a45a3c93f] <==
	I1210 06:16:15.456770       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:16:15.463305       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:16:15.463345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:16:15.465196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:18.919227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:23.179840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:26.778196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:29.831030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:32.853303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:32.857205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:16:32.857368       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:16:32.857496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125336_4b30d9b0-a2b1-463a-bebf-faa373cc0f9b!
	I1210 06:16:32.857501       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e5ce82f-82e7-4b42-b704-b5ef142d393d", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-125336_4b30d9b0-a2b1-463a-bebf-faa373cc0f9b became leader
	W1210 06:16:32.859740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:32.862716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:16:32.958593       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125336_4b30d9b0-a2b1-463a-bebf-faa373cc0f9b!
	W1210 06:16:34.865511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:34.869413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:36.872849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:36.876579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:38.879192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:16:38.884666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336: exit status 2 (311.300687ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-125336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.85s)
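For reference, the status probe in the post-mortem above can be reproduced outside the test harness. Below is a minimal, illustrative Go sketch (not part of helpers_test.go): the binary path, profile name, and node name are copied from the log, and treating a non-zero exit code as a degraded component is an assumption that mirrors the report's "(may be ok)" note.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the post-mortem runs: ask only for the APIServer field of the
	// profile's status via a Go template.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}",
		"-p", "default-k8s-diff-port-125336",
		"-n", "default-k8s-diff-port-125336")
	out, err := cmd.CombinedOutput()
	fmt.Printf("status output: %s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The report records exit status 2 here, which the harness treats as "may be ok".
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("run error:", err)
	}
}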

                                                
                                    

Test pass (353/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.79
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.3/json-events 3.27
14 TestDownloadOnly/v1.34.3/cached-images 0.44
15 TestDownloadOnly/v1.34.3/binaries 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.21
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-rc.1/json-events 3.27
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 1.08
31 TestOffline 65.64
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 102.49
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/serial/GCPAuth/FakeCredentials 8.4
57 TestAddons/StoppedEnableDisable 16.75
58 TestCertOptions 32.38
59 TestCertExpiration 217.79
61 TestForceSystemdFlag 30.14
62 TestForceSystemdEnv 28.06
67 TestErrorSpam/setup 28.06
68 TestErrorSpam/start 0.62
69 TestErrorSpam/status 0.91
70 TestErrorSpam/pause 5.94
71 TestErrorSpam/unpause 4.9
72 TestErrorSpam/stop 8.06
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 45.98
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.05
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
84 TestFunctional/serial/CacheCmd/cache/add_local 1.22
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 67.53
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.21
95 TestFunctional/serial/LogsFileCmd 1.23
96 TestFunctional/serial/InvalidService 3.83
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 26.07
100 TestFunctional/parallel/DryRun 0.46
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 0.96
106 TestFunctional/parallel/ServiceCmdConnect 9.5
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 20.86
110 TestFunctional/parallel/SSHCmd 0.61
111 TestFunctional/parallel/CpCmd 1.96
112 TestFunctional/parallel/MySQL 23.84
113 TestFunctional/parallel/FileSync 0.27
114 TestFunctional/parallel/CertSync 1.77
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
122 TestFunctional/parallel/License 0.46
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
126 TestFunctional/parallel/ProfileCmd/profile_list 0.46
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.21
131 TestFunctional/parallel/ServiceCmd/DeployApp 10.13
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/Version/short 0.09
139 TestFunctional/parallel/Version/components 0.62
140 TestFunctional/parallel/ImageCommands/ImageListShort 1.64
141 TestFunctional/parallel/ImageCommands/ImageListTable 1.42
143 TestFunctional/parallel/ImageCommands/ImageListYaml 1.47
144 TestFunctional/parallel/ImageCommands/ImageBuild 5.29
145 TestFunctional/parallel/ImageCommands/Setup 0.99
146 TestFunctional/parallel/MountCmd/any-port 5.58
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
148 TestFunctional/parallel/ServiceCmd/List 0.52
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
152 TestFunctional/parallel/ServiceCmd/Format 0.34
153 TestFunctional/parallel/ServiceCmd/URL 0.36
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.25
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
162 TestFunctional/parallel/MountCmd/specific-port 1.98
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
164 TestFunctional/delete_echo-server_images 0.03
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 39.83
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 6.93
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.51
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.17
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.46
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 86.91
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.16
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.17
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.27
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.49
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 6.18
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.42
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.22
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.98
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 9.53
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.18
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 22.84
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.68
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.76
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 23.89
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.67
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.63
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.46
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 8.21
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 8.15
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.39
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.38
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.38
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.54
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 7.12
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.51
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.39
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.41
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.39
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.19
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.19
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.18
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.14
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.54
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.25
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.24
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 1.91
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.42
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.55
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.87
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 2.04
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.35
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.58
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 3.86
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.01
265 TestMultiControlPlane/serial/StartCluster 133.35
266 TestMultiControlPlane/serial/DeployApp 5.09
267 TestMultiControlPlane/serial/PingHostFromPods 1
268 TestMultiControlPlane/serial/AddWorkerNode 24.05
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
271 TestMultiControlPlane/serial/CopyFile 16.57
272 TestMultiControlPlane/serial/StopSecondaryNode 19.73
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.88
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 125.05
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.46
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
279 TestMultiControlPlane/serial/StopCluster 49.97
280 TestMultiControlPlane/serial/RestartCluster 56.43
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
282 TestMultiControlPlane/serial/AddSecondaryNode 79.93
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
288 TestJSONOutput/start/Command 43.9
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 7.95
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.22
313 TestKicCustomNetwork/create_custom_network 29.99
314 TestKicCustomNetwork/use_default_bridge_network 25.92
315 TestKicExistingNetwork 25.76
316 TestKicCustomSubnet 27.89
317 TestKicStaticIP 26.6
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 62.42
322 TestMountStart/serial/StartWithMountFirst 7.54
323 TestMountStart/serial/VerifyMountFirst 0.26
324 TestMountStart/serial/StartWithMountSecond 7.48
325 TestMountStart/serial/VerifyMountSecond 0.26
326 TestMountStart/serial/DeleteFirst 1.65
327 TestMountStart/serial/VerifyMountPostDelete 0.26
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.05
330 TestMountStart/serial/VerifyMountPostStop 0.26
333 TestMultiNode/serial/FreshStart2Nodes 99.37
334 TestMultiNode/serial/DeployApp2Nodes 3.22
335 TestMultiNode/serial/PingHostFrom2Pods 0.7
336 TestMultiNode/serial/AddNode 54.42
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.63
339 TestMultiNode/serial/CopyFile 9.45
340 TestMultiNode/serial/StopNode 2.22
341 TestMultiNode/serial/StartAfterStop 7.3
342 TestMultiNode/serial/RestartKeepsNodes 73.88
343 TestMultiNode/serial/DeleteNode 5.04
344 TestMultiNode/serial/StopMultiNode 30.77
345 TestMultiNode/serial/RestartMultiNode 51.35
346 TestMultiNode/serial/ValidateNameConflict 28.93
351 TestPreload 99.78
353 TestScheduledStopUnix 104.38
356 TestInsufficientStorage 7.88
357 TestRunningBinaryUpgrade 324.95
359 TestKubernetesUpgrade 307.03
360 TestMissingContainerUpgrade 101.84
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
363 TestNoKubernetes/serial/StartWithK8s 41.08
364 TestNoKubernetes/serial/StartWithStopK8s 8.84
365 TestNoKubernetes/serial/Start 3.77
366 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
367 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
368 TestNoKubernetes/serial/ProfileList 16.3
369 TestNoKubernetes/serial/Stop 2.18
370 TestNoKubernetes/serial/StartNoArgs 6.26
371 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
372 TestStoppedBinaryUpgrade/Setup 0.6
373 TestStoppedBinaryUpgrade/Upgrade 284.18
381 TestNetworkPlugins/group/false 3.32
393 TestPause/serial/Start 45.2
394 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
395 TestNetworkPlugins/group/auto/Start 54.66
396 TestPause/serial/SecondStartNoReconfiguration 8.74
397 TestNetworkPlugins/group/kindnet/Start 53.05
398 TestNetworkPlugins/group/calico/Start 60.96
400 TestNetworkPlugins/group/custom-flannel/Start 64.51
401 TestNetworkPlugins/group/auto/KubeletFlags 0.33
402 TestNetworkPlugins/group/auto/NetCatPod 9.24
403 TestNetworkPlugins/group/auto/DNS 0.13
404 TestNetworkPlugins/group/auto/Localhost 0.12
405 TestNetworkPlugins/group/auto/HairPin 0.11
406 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
407 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
408 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
409 TestNetworkPlugins/group/calico/ControllerPod 6.01
410 TestNetworkPlugins/group/enable-default-cni/Start 71.49
411 TestNetworkPlugins/group/kindnet/DNS 0.11
412 TestNetworkPlugins/group/kindnet/Localhost 0.09
413 TestNetworkPlugins/group/kindnet/HairPin 0.08
414 TestNetworkPlugins/group/calico/KubeletFlags 0.29
415 TestNetworkPlugins/group/calico/NetCatPod 8.22
416 TestNetworkPlugins/group/calico/DNS 0.13
417 TestNetworkPlugins/group/calico/Localhost 0.09
418 TestNetworkPlugins/group/calico/HairPin 0.14
419 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
420 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.23
421 TestNetworkPlugins/group/custom-flannel/DNS 0.13
422 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
423 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
424 TestNetworkPlugins/group/flannel/Start 55.49
425 TestNetworkPlugins/group/bridge/Start 78.46
427 TestStartStop/group/old-k8s-version/serial/FirstStart 50.56
428 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
429 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
430 TestNetworkPlugins/group/flannel/ControllerPod 6.01
431 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
432 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
433 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
434 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
435 TestNetworkPlugins/group/flannel/NetCatPod 7.17
436 TestNetworkPlugins/group/flannel/DNS 0.12
437 TestNetworkPlugins/group/flannel/Localhost 0.1
438 TestNetworkPlugins/group/flannel/HairPin 0.09
439 TestStartStop/group/old-k8s-version/serial/DeployApp 8.25
441 TestStartStop/group/no-preload/serial/FirstStart 46.41
443 TestStartStop/group/old-k8s-version/serial/Stop 16.45
444 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
445 TestNetworkPlugins/group/bridge/NetCatPod 10.7
447 TestStartStop/group/embed-certs/serial/FirstStart 50.52
448 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
449 TestStartStop/group/old-k8s-version/serial/SecondStart 46.53
450 TestNetworkPlugins/group/bridge/DNS 0.13
451 TestNetworkPlugins/group/bridge/Localhost 0.11
452 TestNetworkPlugins/group/bridge/HairPin 0.11
454 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.25
455 TestStartStop/group/no-preload/serial/DeployApp 8.24
457 TestStartStop/group/no-preload/serial/Stop 18.39
458 TestStartStop/group/embed-certs/serial/DeployApp 6.24
459 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
461 TestStartStop/group/embed-certs/serial/Stop 16.48
462 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
463 TestStartStop/group/no-preload/serial/SecondStart 48.93
464 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
465 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
468 TestStartStop/group/newest-cni/serial/FirstStart 23.88
469 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
470 TestStartStop/group/embed-certs/serial/SecondStart 50.79
471 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
473 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.23
474 TestStartStop/group/newest-cni/serial/DeployApp 0
476 TestStartStop/group/newest-cni/serial/Stop 2.5
477 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
478 TestStartStop/group/newest-cni/serial/SecondStart 10.95
479 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
480 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
481 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.56
482 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
483 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
484 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.72
485 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
487 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.73
489 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
490 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
491 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.63
493 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
494 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
495 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.67
x
+
TestDownloadOnly/v1.28.0/json-events (4.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-967603 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-967603 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.788660803s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.79s)
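Because the test drives start with -o=json, each line the command writes to stdout is a self-contained JSON event. Below is a minimal, illustrative Go sketch (not part of aaa_download_only_test.go) that decodes those lines as they stream; the flags are copied from the invocation above, while the currentstep and message keys inside data are assumptions about the payload, used here only for display.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same download-only invocation as the test above, with stdout streamed
	// line by line so each JSON event can be decoded as it arrives.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-967603", "--force",
		"--alsologtostderr", "--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev struct {
			Type string                 `json:"type"`
			Data map[string]interface{} `json:"data"`
		}
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore anything that is not a JSON event line
		}
		fmt.Printf("%s step=%v msg=%v\n", ev.Type, ev.Data["currentstep"], ev.Data["message"])
	}
	_ = cmd.Wait()
}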

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 05:28:35.239067    9253 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 05:28:35.239177    9253 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
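The preload-exists check above amounts to a stat of the cached tarball. Below is a minimal, illustrative Go sketch; the path is copied verbatim from the log line above.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied verbatim from the preload check logged above.
	tarball := "/home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing:", err)
		return
	}
	fmt.Println("preload tarball found:", tarball)
}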

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-967603
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-967603: exit status 85 (70.085974ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-967603 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-967603 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:30.500646    9265 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:30.500731    9265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:30.500738    9265 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:30.500742    9265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:30.500895    9265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	W1210 05:28:30.500999    9265 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22094-5725/.minikube/config/config.json: open /home/jenkins/minikube-integration/22094-5725/.minikube/config/config.json: no such file or directory
	I1210 05:28:30.501449    9265 out.go:368] Setting JSON to true
	I1210 05:28:30.502289    9265 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":654,"bootTime":1765343856,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:30.502347    9265 start.go:143] virtualization: kvm guest
	I1210 05:28:30.506387    9265 out.go:99] [download-only-967603] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:30.506507    9265 notify.go:221] Checking for updates...
	W1210 05:28:30.506518    9265 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 05:28:30.507810    9265 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:30.509036    9265 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:30.510160    9265 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:28:30.511136    9265 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:28:30.512177    9265 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:28:30.514180    9265 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:28:30.514382    9265 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:30.536536    9265 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:28:30.536608    9265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:30.756798    9265 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-10 05:28:30.747793202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:30.756895    9265 docker.go:319] overlay module found
	I1210 05:28:30.758415    9265 out.go:99] Using the docker driver based on user configuration
	I1210 05:28:30.758436    9265 start.go:309] selected driver: docker
	I1210 05:28:30.758442    9265 start.go:927] validating driver "docker" against <nil>
	I1210 05:28:30.758523    9265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:30.811028    9265 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-10 05:28:30.802007763 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:30.811237    9265 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:30.811712    9265 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 05:28:30.811855    9265 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:28:30.813431    9265 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-967603 host does not exist
	  To start a cluster, run: "minikube start -p download-only-967603"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-967603
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.3/json-events (3.27s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-307099 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-307099 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.265933279s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (3.27s)

TestDownloadOnly/v1.34.3/cached-images (0.44s)

=== RUN   TestDownloadOnly/v1.34.3/cached-images
I1210 05:28:39.049508    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 05:28:39.204283    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 05:28:39.359229    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.34.3/cached-images (0.44s)

TestDownloadOnly/v1.34.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.3/binaries
--- PASS: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-307099
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-307099: exit status 85 (69.261096ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-967603 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-967603 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-967603                                                                                                                                                   │ download-only-967603 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-307099 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-307099 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:35.703839    9623 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:35.703932    9623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:35.703940    9623 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:35.703944    9623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:35.704146    9623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:28:35.704544    9623 out.go:368] Setting JSON to true
	I1210 05:28:35.705297    9623 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":660,"bootTime":1765343856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:35.705343    9623 start.go:143] virtualization: kvm guest
	I1210 05:28:35.706953    9623 out.go:99] [download-only-307099] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:35.707129    9623 notify.go:221] Checking for updates...
	I1210 05:28:35.708294    9623 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:35.709696    9623 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:35.710720    9623 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:28:35.711719    9623 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:28:35.712711    9623 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:28:35.714602    9623 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:28:35.714814    9623 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:35.737281    9623 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:28:35.737382    9623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:35.793375    9623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-10 05:28:35.784055789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:35.793512    9623 docker.go:319] overlay module found
	I1210 05:28:35.794914    9623 out.go:99] Using the docker driver based on user configuration
	I1210 05:28:35.794937    9623 start.go:309] selected driver: docker
	I1210 05:28:35.794944    9623 start.go:927] validating driver "docker" against <nil>
	I1210 05:28:35.795033    9623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:35.845847    9623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-10 05:28:35.83755555 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:35.845994    9623 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:35.846490    9623 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 05:28:35.846617    9623 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:28:35.848156    9623 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-307099 host does not exist
	  To start a cluster, run: "minikube start -p download-only-307099"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-307099
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-rc.1/json-events (3.27s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-967320 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-967320 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.266757034s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (3.27s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1210 05:28:43.161123    9253 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1210 05:28:43.161167    9253 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-967320
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-967320: exit status 85 (69.29275ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-967603 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-967603 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-967603                                                                                                                                                        │ download-only-967603 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-307099 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-307099 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-307099                                                                                                                                                        │ download-only-307099 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-967320 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-967320 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:39.943553   10036 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:39.943766   10036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:39.943774   10036 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:39.943778   10036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:39.943932   10036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:28:39.944359   10036 out.go:368] Setting JSON to true
	I1210 05:28:39.945074   10036 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":664,"bootTime":1765343856,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:39.945134   10036 start.go:143] virtualization: kvm guest
	I1210 05:28:39.946765   10036 out.go:99] [download-only-967320] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:39.946888   10036 notify.go:221] Checking for updates...
	I1210 05:28:39.948051   10036 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:39.949408   10036 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:39.950485   10036 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:28:39.951522   10036 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:28:39.952591   10036 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:28:39.954555   10036 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:28:39.954770   10036 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:39.977800   10036 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:28:39.977865   10036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:40.032938   10036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:28:40.022998215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:40.033040   10036 docker.go:319] overlay module found
	I1210 05:28:40.034413   10036 out.go:99] Using the docker driver based on user configuration
	I1210 05:28:40.034438   10036 start.go:309] selected driver: docker
	I1210 05:28:40.034443   10036 start.go:927] validating driver "docker" against <nil>
	I1210 05:28:40.034508   10036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:40.091994   10036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:28:40.083153142 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:28:40.092196   10036 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:40.092711   10036 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 05:28:40.092864   10036 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:28:40.094367   10036 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-967320 host does not exist
	  To start a cluster, run: "minikube start -p download-only-967320"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-967320
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (1.08s)

=== RUN   TestBinaryMirror
I1210 05:28:44.864370    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-899655 --alsologtostderr --binary-mirror http://127.0.0.1:36067 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-899655" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-899655
--- PASS: TestBinaryMirror (1.08s)

TestOffline (65.64s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-353357 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-353357 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m3.288763183s)
helpers_test.go:176: Cleaning up "offline-crio-353357" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-353357
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-353357: (2.348517653s)
--- PASS: TestOffline (65.64s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-193927
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-193927: exit status 85 (60.99309ms)

-- stdout --
	* Profile "addons-193927" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-193927"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-193927
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-193927: exit status 85 (61.362492ms)

-- stdout --
	* Profile "addons-193927" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-193927"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (102.49s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-193927 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-193927 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m42.490693405s)
--- PASS: TestAddons/Setup (102.49s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-193927 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-193927 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (8.4s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-193927 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-193927 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a64bd81b-5c5c-497a-80f3-8d129505228d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a64bd81b-5c5c-497a-80f3-8d129505228d] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.0028597s
addons_test.go:696: (dbg) Run:  kubectl --context addons-193927 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-193927 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-193927 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.40s)

TestAddons/StoppedEnableDisable (16.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-193927
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-193927: (16.484283973s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-193927
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-193927
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-193927
--- PASS: TestAddons/StoppedEnableDisable (16.75s)

TestCertOptions (32.38s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-357277 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-357277 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.406639732s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-357277 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-357277 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-357277 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-357277" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-357277
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-357277: (2.345234491s)
--- PASS: TestCertOptions (32.38s)

TestCertExpiration (217.79s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-790790 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-790790 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.217628225s)
E1210 06:08:32.326915    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-790790 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-790790 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.297302909s)
helpers_test.go:176: Cleaning up "cert-expiration-790790" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-790790
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-790790: (3.278485627s)
--- PASS: TestCertExpiration (217.79s)

TestForceSystemdFlag (30.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-644043 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-644043 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.500967099s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-644043 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-644043" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-644043
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-644043: (2.34796652s)
--- PASS: TestForceSystemdFlag (30.14s)

TestForceSystemdEnv (28.06s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-872487 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1210 06:06:42.445671    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-872487 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.765094886s)
helpers_test.go:176: Cleaning up "force-systemd-env-872487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-872487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-872487: (2.298501097s)
--- PASS: TestForceSystemdEnv (28.06s)

TestErrorSpam/setup (28.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-471075 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-471075 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-471075 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-471075 --driver=docker  --container-runtime=crio: (28.055778722s)
--- PASS: TestErrorSpam/setup (28.06s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (5.94s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause: exit status 80 (2.246637205s)

-- stdout --
	* Pausing node nospam-471075 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:34:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause: exit status 80 (2.125073172s)

-- stdout --
	* Pausing node nospam-471075 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:34:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause: exit status 80 (1.569551048s)

-- stdout --
	* Pausing node nospam-471075 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:34:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.94s)

                                                
                                    
x
+
TestErrorSpam/unpause (4.9s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause: exit status 80 (1.571125202s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-471075 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:34:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause: exit status 80 (1.629703032s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-471075 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:34:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause: exit status 80 (1.696566383s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-471075 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.90s)

                                                
                                    
x
+
TestErrorSpam/stop (8.06s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 stop: (7.86678046s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-471075 --log_dir /tmp/nospam-471075 stop
--- PASS: TestErrorSpam/stop (8.06s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/test/nested/copy/9253/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (45.98s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604071 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-604071 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.983524964s)
--- PASS: TestFunctional/serial/StartWithProxy (45.98s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (7.05s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1210 05:35:15.036021    9253 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604071 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-604071 --alsologtostderr -v=8: (7.048119789s)
functional_test.go:678: soft start took 7.048813975s for "functional-604071" cluster.
I1210 05:35:22.084509    9253 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (7.05s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-604071 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-604071 /tmp/TestFunctionalserialCacheCmdcacheadd_local3822631622/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cache add minikube-local-cache-test:functional-604071
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cache delete minikube-local-cache-test:functional-604071
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-604071
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (267.686959ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 kubectl -- --context functional-604071 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-604071 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (67.53s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604071 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 05:35:29.262241    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:29.268605    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:29.279933    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:29.301305    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:29.342658    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:29.424047    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:29.585559    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:29.907300    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:30.549625    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:31.830984    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:34.392813    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:39.514808    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:35:49.756271    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:10.238237    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-604071 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.527851149s)
functional_test.go:776: restart took 1m7.52799764s for "functional-604071" cluster.
I1210 05:36:35.654439    9253 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (67.53s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-604071 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-604071 logs: (1.212477419s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 logs --file /tmp/TestFunctionalserialLogsFileCmd4092586747/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-604071 logs --file /tmp/TestFunctionalserialLogsFileCmd4092586747/001/logs.txt: (1.225501827s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.83s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-604071 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-604071
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-604071: exit status 115 (330.024393ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30370 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-604071 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.83s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 config get cpus: exit status 14 (72.235514ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 config get cpus: exit status 14 (78.557346ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (26.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-604071 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-604071 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 50889: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.07s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604071 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-604071 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.910327ms)

                                                
                                                
-- stdout --
	* [functional-604071] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:36:58.536351   50253 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:36:58.536572   50253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:36:58.536581   50253 out.go:374] Setting ErrFile to fd 2...
	I1210 05:36:58.536585   50253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:36:58.536775   50253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:36:58.537196   50253 out.go:368] Setting JSON to false
	I1210 05:36:58.538104   50253 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1162,"bootTime":1765343856,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:36:58.538157   50253 start.go:143] virtualization: kvm guest
	I1210 05:36:58.539754   50253 out.go:179] * [functional-604071] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:36:58.540937   50253 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:36:58.540941   50253 notify.go:221] Checking for updates...
	I1210 05:36:58.543134   50253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:36:58.544414   50253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:36:58.545639   50253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:36:58.550637   50253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:36:58.552253   50253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:36:58.553960   50253 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:36:58.554534   50253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:36:58.579075   50253 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:36:58.579164   50253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:36:58.651575   50253 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 05:36:58.637419379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:36:58.651768   50253 docker.go:319] overlay module found
	I1210 05:36:58.653791   50253 out.go:179] * Using the docker driver based on existing profile
	I1210 05:36:58.655185   50253 start.go:309] selected driver: docker
	I1210 05:36:58.655203   50253 start.go:927] validating driver "docker" against &{Name:functional-604071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-604071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:36:58.655333   50253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:36:58.660223   50253 out.go:203] 
	W1210 05:36:58.665281   50253 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:36:58.666474   50253 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604071 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604071 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-604071 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (160.509468ms)

                                                
                                                
-- stdout --
	* [functional-604071] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:36:52.898011   46605 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:36:52.898103   46605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:36:52.898111   46605 out.go:374] Setting ErrFile to fd 2...
	I1210 05:36:52.898115   46605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:36:52.898411   46605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:36:52.898803   46605 out.go:368] Setting JSON to false
	I1210 05:36:52.899653   46605 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1157,"bootTime":1765343856,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:36:52.899701   46605 start.go:143] virtualization: kvm guest
	I1210 05:36:52.901589   46605 out.go:179] * [functional-604071] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 05:36:52.902742   46605 notify.go:221] Checking for updates...
	I1210 05:36:52.902753   46605 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:36:52.903970   46605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:36:52.905441   46605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:36:52.906703   46605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:36:52.907722   46605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:36:52.908934   46605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:36:52.910424   46605 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:36:52.910926   46605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:36:52.934209   46605 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:36:52.934286   46605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:36:52.987994   46605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:62 SystemTime:2025-12-10 05:36:52.977390711 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:36:52.988125   46605 docker.go:319] overlay module found
	I1210 05:36:52.990520   46605 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 05:36:52.991716   46605 start.go:309] selected driver: docker
	I1210 05:36:52.991732   46605 start.go:927] validating driver "docker" against &{Name:functional-604071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-604071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:36:52.991836   46605 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:36:52.993409   46605 out.go:203] 
	W1210 05:36:52.994679   46605 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:36:52.996363   46605 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-604071 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-604071 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-gvqjs" [bc56576e-37a8-4205-8fd8-adb30c557113] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-gvqjs" [bc56576e-37a8-4205-8fd8-adb30c557113] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003319518s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30961
functional_test.go:1680: http://192.168.49.2:30961: success! body:
Request served by hello-node-connect-7d85dfc575-gvqjs

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30961
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.50s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (20.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [402abd14-8a82-48b8-a869-ea9f06290a07] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00275959s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-604071 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-604071 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-604071 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-604071 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:36:48.774897    9253 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [18945b0c-71c1-4aa6-a00e-04d5268a4253] Pending
helpers_test.go:353: "sp-pod" [18945b0c-71c1-4aa6-a00e-04d5268a4253] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [18945b0c-71c1-4aa6-a00e-04d5268a4253] Running
E1210 05:36:51.200364    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003548034s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-604071 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-604071 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-604071 delete -f testdata/storage-provisioner/pod.yaml: (1.159281319s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-604071 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c2b77766-2184-4235-b598-69c02724d04e] Pending
helpers_test.go:353: "sp-pod" [c2b77766-2184-4235-b598-69c02724d04e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004684718s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-604071 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.86s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.96s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh -n functional-604071 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cp functional-604071:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2560693228/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh -n functional-604071 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh -n functional-604071 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.96s)

                                                
                                    
TestFunctional/parallel/MySQL (23.84s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-604071 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-dr4b6" [db720d7d-b4b4-49d8-be34-9f7014085c9f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-dr4b6" [db720d7d-b4b4-49d8-be34-9f7014085c9f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004478487s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;": exit status 1 (85.488947ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:37:15.186514    9253 retry.go:31] will retry after 1.189413866s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;": exit status 1 (105.780346ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:37:16.482753    9253 retry.go:31] will retry after 2.207096075s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;": exit status 1 (82.683883ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:37:18.774016    9253 retry.go:31] will retry after 1.704113819s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;": exit status 1 (81.572593ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:37:20.560548    9253 retry.go:31] will retry after 2.129924865s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604071 exec mysql-6bcdcbc558-dr4b6 -- mysql -ppassword -e "show databases;"
2025/12/10 05:37:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (23.84s)

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9253/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo cat /etc/test/nested/copy/9253/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.77s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9253.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo cat /etc/ssl/certs/9253.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9253.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo cat /usr/share/ca-certificates/9253.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/92532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo cat /etc/ssl/certs/92532.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/92532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo cat /usr/share/ca-certificates/92532.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-604071 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh "sudo systemctl is-active docker": exit status 1 (261.675602ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh "sudo systemctl is-active containerd": exit status 1 (263.016894ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                    
TestFunctional/parallel/License (0.46s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-604071 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-604071 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-604071 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-604071 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 45369: os: process already finished
helpers_test.go:526: unable to kill pid 45053: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "388.535921ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "75.267531ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "360.610648ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.082085ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-604071 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-604071 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [6291fc17-fbbd-4f2d-b232-fd14110591c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [6291fc17-fbbd-4f2d-b232-fd14110591c4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002926299s
I1210 05:36:52.165171    9253 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-604071 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-604071 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-b4rdt" [bbb2fbb7-d918-415a-9536-cfba513f05b0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-b4rdt" [bbb2fbb7-d918-415a-9536-cfba513f05b0] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003884915s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-604071 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.204.86 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-604071 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-604071 image ls --format short --alsologtostderr: (1.639235987s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604071 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-604071
localhost/kicbase/echo-server:functional-604071
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604071 image ls --format short --alsologtostderr:
I1210 05:37:04.283472   53016 out.go:360] Setting OutFile to fd 1 ...
I1210 05:37:04.283579   53016 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:04.283590   53016 out.go:374] Setting ErrFile to fd 2...
I1210 05:37:04.283597   53016 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:04.283872   53016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:37:04.284663   53016 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:04.284785   53016 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:04.285392   53016 cli_runner.go:164] Run: docker container inspect functional-604071 --format={{.State.Status}}
I1210 05:37:04.310559   53016 ssh_runner.go:195] Run: systemctl --version
I1210 05:37:04.310621   53016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-604071
I1210 05:37:04.334982   53016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-604071/id_rsa Username:docker}
I1210 05:37:04.441591   53016 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 05:37:05.722620   53016 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.280991986s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-604071 image ls --format table --alsologtostderr: (1.418245398s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604071 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.3            │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 740kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3            │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/minikube-local-cache-test     │ functional-604071  │ 019369cc87699 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-604071  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3            │ aa27095f56193 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.3            │ 5826b25d990d7 │ 76MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604071 image ls --format table --alsologtostderr:
I1210 05:37:08.181007   53542 out.go:360] Setting OutFile to fd 1 ...
I1210 05:37:08.181286   53542 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:08.181298   53542 out.go:374] Setting ErrFile to fd 2...
I1210 05:37:08.181304   53542 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:08.181631   53542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:37:08.182471   53542 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:08.182594   53542 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:08.183189   53542 cli_runner.go:164] Run: docker container inspect functional-604071 --format={{.State.Status}}
I1210 05:37:08.203481   53542 ssh_runner.go:195] Run: systemctl --version
I1210 05:37:08.203528   53542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-604071
I1210 05:37:08.224022   53542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-604071/id_rsa Username:docker}
I1210 05:37:08.326860   53542 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 05:37:09.523155   53542 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.196258847s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (1.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-604071 image ls --format yaml --alsologtostderr: (1.472686245s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604071 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6cdf015a972b346dc904e4d8ee30fcff66495a96deb56b6c1000aa064eb71fa5
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76001424"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-604071
size: "4945146"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: 019369cc87699fd72257b66fb79d9769d245f90b8ba22dc4056aefde3b945b1b
repoDigests:
- localhost/minikube-local-cache-test@sha256:cbfa33c38c8da3a025331495ce3dc07a31aad76d412dca8bb30ad5927414547b
repoTags:
- localhost/minikube-local-cache-test:functional-604071
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:0e5c08f69a52d288f6d181c08d0142bb74acb7cf330257e57f835cf60d898a31
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76100234"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:07ea3bc8c077aa2dea58d292bdb37e38198b1de3e5a5fc7d62359906a54be721
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73143588"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:d0377cec3c4eba230c281923387f4be168b48824185c60fb02783df5ada3126e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53850254"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:18b3c745b7314e398516d8a850fe6b88f066f41f6fbd5132705145abc7da8fea
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89047338"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604071 image ls --format yaml --alsologtostderr:
I1210 05:37:04.443674   53112 out.go:360] Setting OutFile to fd 1 ...
I1210 05:37:04.443781   53112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:04.443791   53112 out.go:374] Setting ErrFile to fd 2...
I1210 05:37:04.443797   53112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:04.444070   53112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:37:04.444837   53112 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:04.444970   53112 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:04.445552   53112 cli_runner.go:164] Run: docker container inspect functional-604071 --format={{.State.Status}}
I1210 05:37:04.468850   53112 ssh_runner.go:195] Run: systemctl --version
I1210 05:37:04.468916   53112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-604071
I1210 05:37:04.490892   53112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-604071/id_rsa Username:docker}
I1210 05:37:04.594026   53112 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 05:37:05.724130   53112 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.129780941s)
W1210 05:37:05.776854   53112 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 8d0b5ed1-3a4c-44a4-b876-ade086acf347
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh pgrep buildkitd: exit status 1 (307.303721ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image build -t localhost/my-image:functional-604071 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-604071 image build -t localhost/my-image:functional-604071 testdata/build --alsologtostderr: (4.752221893s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604071 image build -t localhost/my-image:functional-604071 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 10857fa26d2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-604071
--> c7e73362e96
Successfully tagged localhost/my-image:functional-604071
c7e73362e960952caab749e6509d752c3efa1ee440b0ece0033b5c9b5484aa84
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604071 image build -t localhost/my-image:functional-604071 testdata/build --alsologtostderr:
I1210 05:37:06.219035   53458 out.go:360] Setting OutFile to fd 1 ...
I1210 05:37:06.219663   53458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:06.219675   53458 out.go:374] Setting ErrFile to fd 2...
I1210 05:37:06.219683   53458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:37:06.219961   53458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:37:06.220784   53458 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:06.221506   53458 config.go:182] Loaded profile config "functional-604071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:37:06.222101   53458 cli_runner.go:164] Run: docker container inspect functional-604071 --format={{.State.Status}}
I1210 05:37:06.244626   53458 ssh_runner.go:195] Run: systemctl --version
I1210 05:37:06.244700   53458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-604071
I1210 05:37:06.265397   53458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-604071/id_rsa Username:docker}
I1210 05:37:06.371265   53458 build_images.go:162] Building image from path: /tmp/build.1034592625.tar
I1210 05:37:06.371328   53458 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:37:06.381686   53458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1034592625.tar
I1210 05:37:06.386147   53458 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1034592625.tar: stat -c "%s %y" /var/lib/minikube/build/build.1034592625.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1034592625.tar': No such file or directory
I1210 05:37:06.386180   53458 ssh_runner.go:362] scp /tmp/build.1034592625.tar --> /var/lib/minikube/build/build.1034592625.tar (3072 bytes)
I1210 05:37:06.408903   53458 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1034592625
I1210 05:37:06.418710   53458 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1034592625 -xf /var/lib/minikube/build/build.1034592625.tar
I1210 05:37:06.428508   53458 crio.go:315] Building image: /var/lib/minikube/build/build.1034592625
I1210 05:37:06.428579   53458 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-604071 /var/lib/minikube/build/build.1034592625 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 05:37:10.882436   53458 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-604071 /var/lib/minikube/build/build.1034592625 --cgroup-manager=cgroupfs: (4.453811538s)
I1210 05:37:10.882502   53458 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1034592625
I1210 05:37:10.891559   53458 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1034592625.tar
I1210 05:37:10.899440   53458 build_images.go:218] Built localhost/my-image:functional-604071 from /tmp/build.1034592625.tar
I1210 05:37:10.899475   53458 build_images.go:134] succeeded building to: functional-604071
I1210 05:37:10.899482   53458 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-604071
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.58s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdany-port3947339800/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765345013003495086" to /tmp/TestFunctionalparallelMountCmdany-port3947339800/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765345013003495086" to /tmp/TestFunctionalparallelMountCmdany-port3947339800/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765345013003495086" to /tmp/TestFunctionalparallelMountCmdany-port3947339800/001/test-1765345013003495086
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.348665ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:36:53.276226    9253 retry.go:31] will retry after 288.106852ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:36 test-1765345013003495086
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh cat /mount-9p/test-1765345013003495086
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-604071 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [cce5921e-9f12-4e7a-b138-117db27e59e8] Pending
helpers_test.go:353: "busybox-mount" [cce5921e-9f12-4e7a-b138-117db27e59e8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [cce5921e-9f12-4e7a-b138-117db27e59e8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [cce5921e-9f12-4e7a-b138-117db27e59e8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003509746s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-604071 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdany-port3947339800/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image load --daemon kicbase/echo-server:functional-604071 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 service list -o json
functional_test.go:1504: Took "511.534848ms" to run "out/minikube-linux-amd64 -p functional-604071 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image load --daemon kicbase/echo-server:functional-604071 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30747
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30747
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-604071
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image load --daemon kicbase/echo-server:functional-604071 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image save kicbase/echo-server:functional-604071 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image rm kicbase/echo-server:functional-604071 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls
I1210 05:36:57.174885    9253 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)
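For context (not part of the test output): ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip, which can be reproduced by hand against the same profile; the /tmp path below is illustrative:
  # export the image to a tar archive, remove it from the cluster, then restore it from the archive
  out/minikube-linux-amd64 -p functional-604071 image save kicbase/echo-server:functional-604071 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-604071 image rm kicbase/echo-server:functional-604071
  out/minikube-linux-amd64 -p functional-604071 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-604071 image ls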

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-604071
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 image save --daemon kicbase/echo-server:functional-604071 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-604071
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

TestFunctional/parallel/MountCmd/specific-port (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdspecific-port168924363/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (322.46229ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:36:58.905971    9253 retry.go:31] will retry after 646.947839ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdspecific-port168924363/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh "sudo umount -f /mount-9p": exit status 1 (256.383146ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-604071 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdspecific-port168924363/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)
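For context (not part of the test output): the sequence above mounts a host directory into the guest over 9p on a fixed port and checks it from inside the node; a manual sketch with an illustrative source path:
  # start the mount daemon in the background, then verify the mount point from inside the guest
  out/minikube-linux-amd64 mount -p functional-604071 /tmp/mount-src:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry while the daemon comes up, as seen above
  out/minikube-linux-amd64 -p functional-604071 ssh -- ls -la /mount-9p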

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1201717493/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1201717493/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1201717493/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T" /mount1: exit status 1 (316.582615ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:37:00.885050    9253 retry.go:31] will retry after 295.270252ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604071 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-604071 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1201717493/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1201717493/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1201717493/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
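For context (not part of the test output): the cleanup path being verified here is the --kill flag, which tears down every mount daemon started for a profile in one call; a minimal sketch, assuming one or more mounts are active for the profile:
  out/minikube-linux-amd64 mount -p functional-604071 --kill=true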

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-604071
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-604071
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-604071
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-5725/.minikube/files/etc/test/nested/copy/9253/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (39.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589967 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-589967 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (39.828932672s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (39.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1210 05:38:07.570974    9253 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589967 --alsologtostderr -v=8
E1210 05:38:13.122186    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-589967 --alsologtostderr -v=8: (6.929150467s)
functional_test.go:678: soft start took 6.929473365s for "functional-589967" cluster.
I1210 05:38:14.500444    9253 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-589967 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.51s)
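For context (not part of the test output): `cache add` pulls an image into minikube's on-host cache and loads it into the node, which can then be checked from inside the guest; a sketch assuming the functional-589967 profile is still up:
  out/minikube-linux-amd64 -p functional-589967 cache add registry.k8s.io/pause:3.1
  out/minikube-linux-amd64 cache list                                    # the image now appears in the host-side cache
  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl images   # and in the node's container runtime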

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC263916502/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cache add minikube-local-cache-test:functional-589967
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cache delete minikube-local-cache-test:functional-589967
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-589967
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.765487ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.46s)
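For context (not part of the test output): cache_reload simulates losing an image inside the node and restoring it from the host cache; a manual sketch of the same steps, assuming the profile is still running:
  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: the image is gone
  out/minikube-linux-amd64 -p functional-589967 cache reload
  out/minikube-linux-amd64 -p functional-589967 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again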

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 kubectl -- --context functional-589967 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-589967 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (86.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589967 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-589967 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m26.910171541s)
functional_test.go:776: restart took 1m26.910313505s for "functional-589967" cluster.
I1210 05:39:47.394382    9253 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (86.91s)
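For context (not part of the test output): --extra-config forwards a flag to a specific control-plane component on (re)start, and the long duration above comes from --wait=all blocking until every component reports healthy again. The same restart by hand, assuming the profile exists:
  out/minikube-linux-amd64 start -p functional-589967 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all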

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-589967 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-589967 logs: (1.163253904s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1691344695/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-589967 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1691344695/001/logs.txt: (1.167170354s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-589967 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-589967
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-589967: exit status 115 (327.332554ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31731 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-589967 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 config get cpus: exit status 14 (102.080299ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 config get cpus: exit status 14 (78.885801ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.49s)
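For context (not part of the test output): the exit status 14 seen above is how `config get` signals an unset key, which the test toggles deliberately; a minimal sketch against the same profile:
  out/minikube-linux-amd64 -p functional-589967 config set cpus 2
  out/minikube-linux-amd64 -p functional-589967 config get cpus     # prints 2, exit 0
  out/minikube-linux-amd64 -p functional-589967 config unset cpus
  out/minikube-linux-amd64 -p functional-589967 config get cpus     # "specified key could not be found in config", exit 14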

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (6.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-589967 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-589967 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 68435: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (6.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589967 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-589967 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (183.907406ms)

                                                
                                                
-- stdout --
	* [functional-589967] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:40:05.704745   67423 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:40:05.705030   67423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:40:05.705039   67423 out.go:374] Setting ErrFile to fd 2...
	I1210 05:40:05.705045   67423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:40:05.705333   67423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:40:05.705858   67423 out.go:368] Setting JSON to false
	I1210 05:40:05.707075   67423 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1350,"bootTime":1765343856,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:40:05.707141   67423 start.go:143] virtualization: kvm guest
	I1210 05:40:05.709953   67423 out.go:179] * [functional-589967] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:40:05.711095   67423 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:40:05.711098   67423 notify.go:221] Checking for updates...
	I1210 05:40:05.713249   67423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:40:05.714782   67423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:40:05.716616   67423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:40:05.717780   67423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:40:05.718840   67423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:40:05.721232   67423 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 05:40:05.721807   67423 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:40:05.748980   67423 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:40:05.749146   67423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:40:05.818909   67423 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-10 05:40:05.808838681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:40:05.819009   67423 docker.go:319] overlay module found
	I1210 05:40:05.820422   67423 out.go:179] * Using the docker driver based on existing profile
	I1210 05:40:05.821394   67423 start.go:309] selected driver: docker
	I1210 05:40:05.821412   67423 start.go:927] validating driver "docker" against &{Name:functional-589967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-589967 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:40:05.821527   67423 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:40:05.823403   67423 out.go:203] 
	W1210 05:40:05.824484   67423 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:40:05.825515   67423 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589967 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.42s)
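For context (not part of the test output): --dry-run runs flag and resource validation without touching the existing cluster; asking for less than the 1800MB floor fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while an otherwise valid invocation passes. A manual sketch with the same flags:
  out/minikube-linux-amd64 start -p functional-589967 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1   # exit 23
  out/minikube-linux-amd64 start -p functional-589967 --dry-run --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                  # validation passes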

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-589967 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-589967 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (217.284421ms)

                                                
                                                
-- stdout --
	* [functional-589967] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:40:05.504548   67168 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:40:05.504839   67168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:40:05.504853   67168 out.go:374] Setting ErrFile to fd 2...
	I1210 05:40:05.504859   67168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:40:05.505329   67168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:40:05.505848   67168 out.go:368] Setting JSON to false
	I1210 05:40:05.507060   67168 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1349,"bootTime":1765343856,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:40:05.507146   67168 start.go:143] virtualization: kvm guest
	I1210 05:40:05.531820   67168 out.go:179] * [functional-589967] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 05:40:05.533244   67168 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:40:05.533272   67168 notify.go:221] Checking for updates...
	I1210 05:40:05.535349   67168 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:40:05.536484   67168 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 05:40:05.537643   67168 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 05:40:05.538928   67168 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:40:05.540072   67168 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:40:05.544755   67168 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 05:40:05.545543   67168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:40:05.573895   67168 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:40:05.574043   67168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:40:05.635705   67168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 05:40:05.624933753 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:40:05.635803   67168 docker.go:319] overlay module found
	I1210 05:40:05.637275   67168 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 05:40:05.638336   67168 start.go:309] selected driver: docker
	I1210 05:40:05.638348   67168 start.go:927] validating driver "docker" against &{Name:functional-589967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-589967 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:40:05.638434   67168 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:40:05.639927   67168 out.go:203] 
	W1210 05:40:05.640865   67168 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:40:05.641938   67168 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.98s)
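For context (not part of the test output): `status` accepts a Go template via -f (field names such as .Host, .Kubelet, .APIServer and .Kubeconfig) or structured output via -o json; a sketch, with the template quoted for the shell:
  out/minikube-linux-amd64 -p functional-589967 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-amd64 -p functional-589967 status -o json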

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-589967 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-589967 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-2c9gt" [fef54c6b-bff5-4696-b918-aeeb0eb49756] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-2c9gt" [fef54c6b-bff5-4696-b918-aeeb0eb49756] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004096378s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31266
functional_test.go:1680: http://192.168.49.2:31266: success! body:
Request served by hello-node-connect-9f67c86d4-2c9gt

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31266
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.53s)
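For context (not part of the test output): the connectivity check above boils down to exposing a deployment as a NodePort service and hitting the URL that minikube resolves for it; a manual sketch against the same profile:
  kubectl --context functional-589967 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-589967 expose deployment hello-node-connect --type=NodePort --port=8080
  curl "$(out/minikube-linux-amd64 -p functional-589967 service hello-node-connect --url)"   # echoes the request once the pod is Running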

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (22.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [5b44f621-8371-4804-9d92-e5bc5b32f5c6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002817834s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-589967 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-589967 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-589967 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-589967 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:40:00.882401    9253 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c0666a43-5f54-4746-a253-3e0044047e98] Pending
helpers_test.go:353: "sp-pod" [c0666a43-5f54-4746-a253-3e0044047e98] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [c0666a43-5f54-4746-a253-3e0044047e98] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003594875s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-589967 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-589967 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-589967 delete -f testdata/storage-provisioner/pod.yaml: (1.150772924s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-589967 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:40:09.274416    9253 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [956d6647-2a1a-422d-99d3-aa200d187e23] Pending
helpers_test.go:353: "sp-pod" [956d6647-2a1a-422d-99d3-aa200d187e23] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [956d6647-2a1a-422d-99d3-aa200d187e23] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003105097s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-589967 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (22.84s)
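The persistence check above amounts to: write through one pod, delete it, and read the same claim from a fresh pod. A rough shell sketch, assuming the same testdata manifests (which define the myclaim PVC and the sp-pod pod):

    kubectl --context functional-589967 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-589967 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-589967 wait --for=condition=ready pod sp-pod --timeout=360s
    kubectl --context functional-589967 exec sp-pod -- touch /tmp/mount/foo
    # Recreate the pod and confirm the file survived on the persistent volume.
    kubectl --context functional-589967 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-589967 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-589967 wait --for=condition=ready pod sp-pod --timeout=360s
    kubectl --context functional-589967 exec sp-pod -- ls /tmp/mount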

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh -n functional-589967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cp functional-589967:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm1511919564/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh -n functional-589967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh -n functional-589967 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.76s)
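The cp checks copy a file in each direction and read it back over SSH. A minimal sketch (the /tmp/cp-test.txt destination is illustrative):

    out/minikube-linux-amd64 -p functional-589967 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-589967 ssh -n functional-589967 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-589967 cp functional-589967:/home/docker/cp-test.txt /tmp/cp-test.txt
    diff testdata/cp-test.txt /tmp/cp-test.txt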

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (23.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-589967 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-klbpg" [ac6c8e90-6ddb-4c51-842f-943556c88e4f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-klbpg" [ac6c8e90-6ddb-4c51-842f-943556c88e4f] Running
E1210 05:40:29.262704    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 18.003813192s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589967 exec mysql-7d7b65bc95-klbpg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-589967 exec mysql-7d7b65bc95-klbpg -- mysql -ppassword -e "show databases;": exit status 1 (88.10064ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1210 05:40:30.861647    9253 retry.go:31] will retry after 1.300019221s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589967 exec mysql-7d7b65bc95-klbpg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-589967 exec mysql-7d7b65bc95-klbpg -- mysql -ppassword -e "show databases;": exit status 1 (81.905751ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1210 05:40:32.245170    9253 retry.go:31] will retry after 1.299438313s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589967 exec mysql-7d7b65bc95-klbpg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-589967 exec mysql-7d7b65bc95-klbpg -- mysql -ppassword -e "show databases;": exit status 1 (86.303871ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1210 05:40:33.632115    9253 retry.go:31] will retry after 2.769712609s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-589967 exec mysql-7d7b65bc95-klbpg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (23.89s)
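The 1045/2002 errors above are expected while mysqld is still initialising, so the check is effectively a poll loop. A hedged shell sketch of the same idea, assuming testdata/mysql.yaml sets the root password to "password":

    kubectl --context functional-589967 replace --force -f testdata/mysql.yaml
    kubectl --context functional-589967 wait --for=condition=ready pod -l app=mysql --timeout=600s
    POD=$(kubectl --context functional-589967 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    # Retry until the server is actually accepting connections with the configured password.
    until kubectl --context functional-589967 exec "$POD" -- mysql -ppassword -e "show databases;"; do sleep 2; done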

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9253/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /etc/test/nested/copy/9253/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9253.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /etc/ssl/certs/9253.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9253.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /usr/share/ca-certificates/9253.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/92532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /etc/ssl/certs/92532.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/92532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /usr/share/ca-certificates/92532.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.67s)
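Cert sync places the host's test certificates into the node under both /etc/ssl/certs and /usr/share/ca-certificates, named after the test process ID (9253 in this run); the .0 files are OpenSSL subject-hash style names. A quick manual spot check, assuming the same paths:

    out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /etc/ssl/certs/9253.pem"
    out/minikube-linux-amd64 -p functional-589967 ssh "sudo cat /usr/share/ca-certificates/9253.pem"
    out/minikube-linux-amd64 -p functional-589967 ssh "ls -l /etc/ssl/certs/51391683.0 /etc/ssl/certs/3ec20f2e.0"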

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-589967 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh "sudo systemctl is-active docker": exit status 1 (315.288199ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh "sudo systemctl is-active containerd": exit status 1 (317.208826ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.63s)
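With crio selected, docker and containerd must both be inactive inside the node; systemctl is-active exits non-zero (3) for an inactive unit, which is why the failed ssh calls above still count as a pass. A minimal sketch of the same check:

    out/minikube-linux-amd64 -p functional-589967 ssh "sudo systemctl is-active docker" | grep -q inactive
    out/minikube-linux-amd64 -p functional-589967 ssh "sudo systemctl is-active containerd" | grep -q inactive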

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-589967 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-589967 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-589967 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 64322: os: process already finished
helpers_test.go:520: unable to terminate pid 63927: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-589967 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-589967 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-589967 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [d0982d17-3be6-476b-b42f-a5f405a9d62d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [d0982d17-3be6-476b-b42f-a5f405a9d62d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003369376s
I1210 05:40:02.743726    9253 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)
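The tunnel flow: keep minikube tunnel running in the background, create the LoadBalancer service, and wait for an ingress IP, which AccessDirect then curls. A sketch, assuming testdata/testsvc.yaml defines the nginx-svc LoadBalancer service:

    out/minikube-linux-amd64 -p functional-589967 tunnel --alsologtostderr &
    kubectl --context functional-589967 apply -f testdata/testsvc.yaml
    # Wait for the tunnel to assign an external IP, then hit the service directly.
    until IP=$(kubectl --context functional-589967 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do sleep 2; done
    curl -fsS "http://$IP"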

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-589967 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-589967 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-762nn" [fdff30b6-186e-44fc-b44b-21040d884cb7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-762nn" [fdff30b6-186e-44fc-b44b-21040d884cb7] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003220174s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-589967 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.192.204 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-589967 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "319.677215ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.905318ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "324.058277ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.767339ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)
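profile list has a human-readable and a JSON form; the --light/-l variants skip probing cluster status, which is why they return in ~60ms against ~320ms above. For scripting, the JSON output can be filtered with jq (an assumption: jq is not part of minikube, and the usual valid/invalid top-level keys are assumed):

    # Print the names of all valid profiles without touching the clusters themselves.
    out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'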

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (7.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770204903/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765345204058472202" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770204903/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765345204058472202" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770204903/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765345204058472202" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770204903/001/test-1765345204058472202
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (310.771282ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 05:40:04.369575    9253 retry.go:31] will retry after 692.896918ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:40 test-1765345204058472202
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh cat /mount-9p/test-1765345204058472202
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-589967 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [dd31f505-5d9a-4c2a-b7f9-b56e12952429] Pending
helpers_test.go:353: "busybox-mount" [dd31f505-5d9a-4c2a-b7f9-b56e12952429] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [dd31f505-5d9a-4c2a-b7f9-b56e12952429] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [dd31f505-5d9a-4c2a-b7f9-b56e12952429] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003528532s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-589967 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770204903/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (7.12s)
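The 9p mount flow as a sketch: run minikube mount in the background, verify it from inside the node, then unmount and stop the helper (the host path /tmp/mount-demo is illustrative). The first findmnt failure above is just the mount not having settled yet:

    mkdir -p /tmp/mount-demo
    out/minikube-linux-amd64 mount -p functional-589967 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    sleep 3   # give the 9p mount a moment to appear
    out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-589967 ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"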

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 service list -o json
functional_test.go:1504: Took "509.776052ms" to run "out/minikube-linux-amd64 -p functional-589967 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30717
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30717
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.39s)
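The ServiceCmd subtests above are different views of the same hello-node NodePort: a list, a JSON list, an https:// URL, the bare IP via --format, and the plain URL. The same commands, collected:

    out/minikube-linux-amd64 -p functional-589967 service list -o json
    out/minikube-linux-amd64 -p functional-589967 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-589967 service hello-node --url --format='{{.IP}}'
    out/minikube-linux-amd64 -p functional-589967 service hello-node --url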

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589967 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-589967
localhost/kicbase/echo-server:functional-589967
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589967 image ls --format short --alsologtostderr:
I1210 05:40:20.961093   72513 out.go:360] Setting OutFile to fd 1 ...
I1210 05:40:20.961220   72513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:20.961228   72513 out.go:374] Setting ErrFile to fd 2...
I1210 05:40:20.961234   72513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:20.961498   72513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:40:20.962206   72513 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:20.962335   72513 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:20.962885   72513 cli_runner.go:164] Run: docker container inspect functional-589967 --format={{.State.Status}}
I1210 05:40:20.982669   72513 ssh_runner.go:195] Run: systemctl --version
I1210 05:40:20.982720   72513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-589967
I1210 05:40:21.002775   72513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-589967/id_rsa Username:docker}
I1210 05:40:21.096207   72513 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.24s)
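image ls supports short, table, json and yaml output; on a crio cluster each variant is built from the same "sudo crictl images --output json" call on the node, as the stderr traces show. The direct equivalents:

    out/minikube-linux-amd64 -p functional-589967 image ls --format table
    out/minikube-linux-amd64 -p functional-589967 ssh "sudo crictl images --output json"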

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589967 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-589967  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ localhost/minikube-local-cache-test     │ functional-589967  │ 019369cc87699 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589967 image ls --format table --alsologtostderr:
I1210 05:40:21.194692   72675 out.go:360] Setting OutFile to fd 1 ...
I1210 05:40:21.194825   72675 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:21.194838   72675 out.go:374] Setting ErrFile to fd 2...
I1210 05:40:21.194843   72675 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:21.195159   72675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:40:21.195882   72675 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:21.196016   72675 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:21.196590   72675 cli_runner.go:164] Run: docker container inspect functional-589967 --format={{.State.Status}}
I1210 05:40:21.215178   72675 ssh_runner.go:195] Run: systemctl --version
I1210 05:40:21.215226   72675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-589967
I1210 05:40:21.233796   72675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-589967/id_rsa Username:docker}
I1210 05:40:21.329978   72675 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589967 image ls --format json --alsologtostderr:
[{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0b
ba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTag
s":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb
99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kic
base/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-589967"],"size":"4945146"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87
b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"019369cc87699fd72257b66fb79d9769d245f90b8ba22dc4056aefde3b945b1b","repoDigests":["localhost/minikube-local-cache-test@sha256:cbfa33c38c8da3a025331495ce3dc07a31aad76d412dca8bb30ad5927414547b"],"repoTags":["localhost/minikube-local-cache-test:functional-589967"],"size":"3330"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e
51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589967 image ls --format json --alsologtostderr:
I1210 05:40:20.960897   72514 out.go:360] Setting OutFile to fd 1 ...
I1210 05:40:20.961218   72514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:20.961227   72514 out.go:374] Setting ErrFile to fd 2...
I1210 05:40:20.961232   72514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:20.961429   72514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:40:20.962009   72514 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:20.962251   72514 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:20.962820   72514 cli_runner.go:164] Run: docker container inspect functional-589967 --format={{.State.Status}}
I1210 05:40:20.983324   72514 ssh_runner.go:195] Run: systemctl --version
I1210 05:40:20.983409   72514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-589967
I1210 05:40:21.003623   72514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-589967/id_rsa Username:docker}
I1210 05:40:21.096225   72514 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589967 image ls --format yaml --alsologtostderr:
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-589967
size: "4945146"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 019369cc87699fd72257b66fb79d9769d245f90b8ba22dc4056aefde3b945b1b
repoDigests:
- localhost/minikube-local-cache-test@sha256:cbfa33c38c8da3a025331495ce3dc07a31aad76d412dca8bb30ad5927414547b
repoTags:
- localhost/minikube-local-cache-test:functional-589967
size: "3330"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589967 image ls --format yaml --alsologtostderr:
I1210 05:40:20.961673   72515 out.go:360] Setting OutFile to fd 1 ...
I1210 05:40:20.962211   72515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:20.962230   72515 out.go:374] Setting ErrFile to fd 2...
I1210 05:40:20.962237   72515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:20.962681   72515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:40:20.963815   72515 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:20.963930   72515 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:20.964349   72515 cli_runner.go:164] Run: docker container inspect functional-589967 --format={{.State.Status}}
I1210 05:40:20.983524   72515 ssh_runner.go:195] Run: systemctl --version
I1210 05:40:20.983566   72515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-589967
I1210 05:40:21.002591   72515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-589967/id_rsa Username:docker}
I1210 05:40:21.096207   72515 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)
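The JSON and YAML listings above are two renderings of the same `image ls` query against the node's image store (the stderr traces show minikube shelling out to `sudo crictl images --output json`). A rough by-hand equivalent, with `minikube` standing in for the test's out/minikube-linux-amd64 build:

  # list images known to the container runtime inside the functional-589967 node
  minikube -p functional-589967 image ls --format json
  minikube -p functional-589967 image ls --format yaml
  # add --alsologtostderr to see the same debug trace the test captures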

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (1.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh pgrep buildkitd: exit status 1 (264.428053ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image build -t localhost/my-image:functional-589967 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-589967 image build -t localhost/my-image:functional-589967 testdata/build --alsologtostderr: (1.426244369s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-589967 image build -t localhost/my-image:functional-589967 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 307d98c5355
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-589967
--> ccc7a40d2ae
Successfully tagged localhost/my-image:functional-589967
ccc7a40d2ae0bb5931cc80fcaf9cc74506386f45442517804186b53073ec8838
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-589967 image build -t localhost/my-image:functional-589967 testdata/build --alsologtostderr:
I1210 05:40:21.455369   72834 out.go:360] Setting OutFile to fd 1 ...
I1210 05:40:21.455643   72834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:21.455653   72834 out.go:374] Setting ErrFile to fd 2...
I1210 05:40:21.455660   72834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:40:21.455837   72834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
I1210 05:40:21.456404   72834 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:21.457028   72834 config.go:182] Loaded profile config "functional-589967": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 05:40:21.457469   72834 cli_runner.go:164] Run: docker container inspect functional-589967 --format={{.State.Status}}
I1210 05:40:21.474470   72834 ssh_runner.go:195] Run: systemctl --version
I1210 05:40:21.474538   72834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-589967
I1210 05:40:21.490327   72834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/functional-589967/id_rsa Username:docker}
I1210 05:40:21.582812   72834 build_images.go:162] Building image from path: /tmp/build.976134954.tar
I1210 05:40:21.582868   72834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:40:21.589983   72834 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.976134954.tar
I1210 05:40:21.593272   72834 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.976134954.tar: stat -c "%s %y" /var/lib/minikube/build/build.976134954.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.976134954.tar': No such file or directory
I1210 05:40:21.593297   72834 ssh_runner.go:362] scp /tmp/build.976134954.tar --> /var/lib/minikube/build/build.976134954.tar (3072 bytes)
I1210 05:40:21.609509   72834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.976134954
I1210 05:40:21.616470   72834 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.976134954 -xf /var/lib/minikube/build/build.976134954.tar
I1210 05:40:21.623668   72834 crio.go:315] Building image: /var/lib/minikube/build/build.976134954
I1210 05:40:21.623720   72834 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-589967 /var/lib/minikube/build/build.976134954 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 05:40:22.803990   72834 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-589967 /var/lib/minikube/build/build.976134954 --cgroup-manager=cgroupfs: (1.180244532s)
I1210 05:40:22.804044   72834 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.976134954
I1210 05:40:22.811904   72834 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.976134954.tar
I1210 05:40:22.819049   72834 build_images.go:218] Built localhost/my-image:functional-589967 from /tmp/build.976134954.tar
I1210 05:40:22.819087   72834 build_images.go:134] succeeded building to: functional-589967
I1210 05:40:22.819094   72834 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (1.91s)
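Because `ssh pgrep buildkitd` exits 1 (there is no BuildKit daemon on this CRI-O node), the test builds directly on the node, which minikube hands off to `sudo podman build` as the stderr above shows; the three STEP lines correspond to a context that pulls gcr.io/k8s-minikube/busybox, runs `true`, and adds content.txt. A minimal way to repeat the same build by hand, assuming the testdata/build context from the minikube repository is available locally:

  # buildkitd is absent on CRI-O nodes, hence the exit status 1 above
  minikube -p functional-589967 ssh pgrep buildkitd
  # build the context on the node and tag the result
  minikube -p functional-589967 image build -t localhost/my-image:functional-589967 testdata/build
  # confirm the new image is visible to the runtime
  minikube -p functional-589967 image ls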

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-589967
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image load --daemon kicbase/echo-server:functional-589967 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-589967 image load --daemon kicbase/echo-server:functional-589967 --alsologtostderr: (1.320259437s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.55s)
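Setup and ImageLoadDaemon together amount to tagging an image in the host's Docker daemon and copying it into the node's image store. By hand, assuming Docker is available on the host as in this CI environment:

  # pull and tag the test image on the host (the Setup step)
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-589967
  # push it from the host Docker daemon into the cluster node
  minikube -p functional-589967 image load --daemon kicbase/echo-server:functional-589967
  minikube -p functional-589967 image ls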

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image load --daemon kicbase/echo-server:functional-589967 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2852179160/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (333.640166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:40:11.511336    9253 retry.go:31] will retry after 474.669588ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p"
2025/12/10 05:40:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2852179160/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh "sudo umount -f /mount-9p": exit status 1 (287.188139ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-589967 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2852179160/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.87s)
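The specific-port case runs `minikube mount` with a fixed 9p port and checks the result from inside the node; the "umount: /mount-9p: not mounted" failure near the end does not fail the test because the mount daemon had already been stopped. Repeated by hand (46464 is the port this run used; the mount command stays in the foreground, so background it or use a second terminal):

  # /tmp/some-host-dir is a placeholder for any host directory to share over 9p
  minikube mount -p functional-589967 /tmp/some-host-dir:/mount-9p --port 46464 &
  # verify from inside the node that /mount-9p is a 9p mount and list it
  minikube -p functional-589967 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-589967 ssh -- ls -la /mount-9p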

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-589967
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image load --daemon kicbase/echo-server:functional-589967 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1286470256/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1286470256/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1286470256/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T" /mount1: exit status 1 (350.887716ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:40:13.403675    9253 retry.go:31] will retry after 497.187041ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-589967 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1286470256/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1286470256/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-589967 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1286470256/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.04s)
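VerifyCleanup mounts one host directory at three guest paths and then removes every mount process for the profile with --kill instead of unmounting each one. A sketch, with /tmp/shared as a placeholder host directory:

  # three concurrent mounts of the same host directory (each stays in the foreground)
  minikube mount -p functional-589967 /tmp/shared:/mount1 &
  minikube mount -p functional-589967 /tmp/shared:/mount2 &
  minikube mount -p functional-589967 /tmp/shared:/mount3 &
  # check one of them from inside the node
  minikube -p functional-589967 ssh "findmnt -T" /mount1
  # kill all mount processes belonging to the profile in one go
  minikube mount -p functional-589967 --kill=true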

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image save kicbase/echo-server:functional-589967 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image rm kicbase/echo-server:functional-589967 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (3.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-589967
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-589967 image save --daemon kicbase/echo-server:functional-589967 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-589967 image save --daemon kicbase/echo-server:functional-589967 --alsologtostderr: (3.818069614s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-589967
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (3.86s)
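The save/remove subtests exercise the export paths: writing a node image to a tarball on the host, saving it straight into the host's Docker daemon (which is why `docker image inspect localhost/kicbase/echo-server:functional-589967` succeeds afterwards), and deleting it from the node. Roughly, with an arbitrary output path:

  # export the image from the node to a tar file on the host
  minikube -p functional-589967 image save kicbase/echo-server:functional-589967 ./echo-server-save.tar
  # save it from the node directly into the host Docker daemon and confirm it arrived
  minikube -p functional-589967 image save --daemon kicbase/echo-server:functional-589967
  docker image inspect localhost/kicbase/echo-server:functional-589967
  # remove it from the node's image store
  minikube -p functional-589967 image rm kicbase/echo-server:functional-589967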

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-589967
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-589967
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-589967
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (133.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 05:40:56.963756    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:42.445818    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:42.452167    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:42.463481    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:42.484799    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:42.526139    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:42.607559    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:42.769435    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:43.091130    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:43.732746    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:45.014004    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:47.575650    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:52.697169    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:42:02.938762    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:42:23.420765    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m12.667974129s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (133.35s)
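The E1210 cert_rotation lines interleaved above come from the long-running test process (pid 9253) complaining about client certs of other profiles (addons-193927, functional-604071) that no longer exist on disk; they appear unrelated to this test, which passes. Stripped of the test's logging flags, the cluster is brought up with:

  # start a highly-available cluster (multiple control planes) on the docker driver with CRI-O
  minikube start -p ha-087661 --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  # per-node host/kubelet/apiserver status
  minikube -p ha-087661 status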

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 kubectl -- rollout status deployment/busybox: (3.160746708s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-4fjg8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-dqk57 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-4fjg8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-dqk57 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-4fjg8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-dqk57 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.09s)
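DeployApp applies the busybox DNS test manifest through the profile-scoped kubectl, waits for the rollout, and then resolves in-cluster names from every replica. Condensed to one replica:

  # deploy the test workload and wait for it to become ready
  minikube -p ha-087661 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  minikube -p ha-087661 kubectl -- rollout status deployment/busybox
  # spot-check in-cluster DNS from one pod (pod names vary per run)
  minikube -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- nslookup kubernetes.default.svc.cluster.local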

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.00s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-4fjg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-4fjg8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-dqk57 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 kubectl -- exec busybox-7b57f96db7-dqk57 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)
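PingHostFromPods resolves host.minikube.internal from inside each pod and pings the address it gets back (192.168.49.1 in this run). For one pod:

  # resolve the host address from inside the pod, then ping it
  minikube -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  minikube -p ha-087661 kubectl -- exec busybox-7b57f96db7-45m2d -- sh -c "ping -c 1 192.168.49.1"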

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 node add --alsologtostderr -v 5
E1210 05:43:04.382867    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 node add --alsologtostderr -v 5: (23.215675823s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-087661 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (16.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp testdata/cp-test.txt ha-087661:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458348527/001/cp-test_ha-087661.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661:/home/docker/cp-test.txt ha-087661-m02:/home/docker/cp-test_ha-087661_ha-087661-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test_ha-087661_ha-087661-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661:/home/docker/cp-test.txt ha-087661-m03:/home/docker/cp-test_ha-087661_ha-087661-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test_ha-087661_ha-087661-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661:/home/docker/cp-test.txt ha-087661-m04:/home/docker/cp-test_ha-087661_ha-087661-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test_ha-087661_ha-087661-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp testdata/cp-test.txt ha-087661-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458348527/001/cp-test_ha-087661-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m02:/home/docker/cp-test.txt ha-087661:/home/docker/cp-test_ha-087661-m02_ha-087661.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test_ha-087661-m02_ha-087661.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m02:/home/docker/cp-test.txt ha-087661-m03:/home/docker/cp-test_ha-087661-m02_ha-087661-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test_ha-087661-m02_ha-087661-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m02:/home/docker/cp-test.txt ha-087661-m04:/home/docker/cp-test_ha-087661-m02_ha-087661-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test_ha-087661-m02_ha-087661-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp testdata/cp-test.txt ha-087661-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458348527/001/cp-test_ha-087661-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m03:/home/docker/cp-test.txt ha-087661:/home/docker/cp-test_ha-087661-m03_ha-087661.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test_ha-087661-m03_ha-087661.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m03:/home/docker/cp-test.txt ha-087661-m02:/home/docker/cp-test_ha-087661-m03_ha-087661-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test_ha-087661-m03_ha-087661-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m03:/home/docker/cp-test.txt ha-087661-m04:/home/docker/cp-test_ha-087661-m03_ha-087661-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test_ha-087661-m03_ha-087661-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp testdata/cp-test.txt ha-087661-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458348527/001/cp-test_ha-087661-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m04:/home/docker/cp-test.txt ha-087661:/home/docker/cp-test_ha-087661-m04_ha-087661.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661 "sudo cat /home/docker/cp-test_ha-087661-m04_ha-087661.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m04:/home/docker/cp-test.txt ha-087661-m02:/home/docker/cp-test_ha-087661-m04_ha-087661-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test_ha-087661-m04_ha-087661-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 cp ha-087661-m04:/home/docker/cp-test.txt ha-087661-m03:/home/docker/cp-test_ha-087661-m04_ha-087661-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 ssh -n ha-087661-m03 "sudo cat /home/docker/cp-test_ha-087661-m04_ha-087661-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.57s)
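CopyFile pushes testdata/cp-test.txt to every node and round-trips it between each pair of nodes with `minikube cp`, verifying every copy with `ssh -n ... sudo cat`. One representative leg:

  # copy a local file to the primary node, then from that node to a secondary control plane
  minikube -p ha-087661 cp testdata/cp-test.txt ha-087661:/home/docker/cp-test.txt
  minikube -p ha-087661 cp ha-087661:/home/docker/cp-test.txt ha-087661-m02:/home/docker/cp-test_ha-087661_ha-087661-m02.txt
  # read the file back on the destination node
  minikube -p ha-087661 ssh -n ha-087661-m02 "sudo cat /home/docker/cp-test_ha-087661_ha-087661-m02.txt"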

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (19.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 node stop m02 --alsologtostderr -v 5: (19.063919467s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5: exit status 7 (664.130054ms)

                                                
                                                
-- stdout --
	ha-087661
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-087661-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-087661-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-087661-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:43:59.608906   96438 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:43:59.609015   96438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:59.609028   96438 out.go:374] Setting ErrFile to fd 2...
	I1210 05:43:59.609035   96438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:59.609279   96438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:43:59.609450   96438 out.go:368] Setting JSON to false
	I1210 05:43:59.609473   96438 mustload.go:66] Loading cluster: ha-087661
	I1210 05:43:59.609597   96438 notify.go:221] Checking for updates...
	I1210 05:43:59.609825   96438 config.go:182] Loaded profile config "ha-087661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:43:59.609837   96438 status.go:174] checking status of ha-087661 ...
	I1210 05:43:59.611194   96438 cli_runner.go:164] Run: docker container inspect ha-087661 --format={{.State.Status}}
	I1210 05:43:59.630202   96438 status.go:371] ha-087661 host status = "Running" (err=<nil>)
	I1210 05:43:59.630245   96438 host.go:66] Checking if "ha-087661" exists ...
	I1210 05:43:59.630622   96438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-087661
	I1210 05:43:59.649267   96438 host.go:66] Checking if "ha-087661" exists ...
	I1210 05:43:59.649509   96438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:43:59.649545   96438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-087661
	I1210 05:43:59.666361   96438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/ha-087661/id_rsa Username:docker}
	I1210 05:43:59.759194   96438 ssh_runner.go:195] Run: systemctl --version
	I1210 05:43:59.765399   96438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:43:59.777679   96438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:59.830392   96438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 05:43:59.820460671 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:59.830892   96438 kubeconfig.go:125] found "ha-087661" server: "https://192.168.49.254:8443"
	I1210 05:43:59.830923   96438 api_server.go:166] Checking apiserver status ...
	I1210 05:43:59.830964   96438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:43:59.841879   96438 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2318/cgroup
	W1210 05:43:59.849615   96438 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2318/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:43:59.849666   96438 ssh_runner.go:195] Run: ls
	I1210 05:43:59.853122   96438 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 05:43:59.857070   96438 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 05:43:59.857101   96438 status.go:463] ha-087661 apiserver status = Running (err=<nil>)
	I1210 05:43:59.857110   96438 status.go:176] ha-087661 status: &{Name:ha-087661 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:43:59.857137   96438 status.go:174] checking status of ha-087661-m02 ...
	I1210 05:43:59.857362   96438 cli_runner.go:164] Run: docker container inspect ha-087661-m02 --format={{.State.Status}}
	I1210 05:43:59.874928   96438 status.go:371] ha-087661-m02 host status = "Stopped" (err=<nil>)
	I1210 05:43:59.874946   96438 status.go:384] host is not running, skipping remaining checks
	I1210 05:43:59.874951   96438 status.go:176] ha-087661-m02 status: &{Name:ha-087661-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:43:59.874968   96438 status.go:174] checking status of ha-087661-m03 ...
	I1210 05:43:59.875241   96438 cli_runner.go:164] Run: docker container inspect ha-087661-m03 --format={{.State.Status}}
	I1210 05:43:59.893169   96438 status.go:371] ha-087661-m03 host status = "Running" (err=<nil>)
	I1210 05:43:59.893190   96438 host.go:66] Checking if "ha-087661-m03" exists ...
	I1210 05:43:59.893432   96438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-087661-m03
	I1210 05:43:59.911817   96438 host.go:66] Checking if "ha-087661-m03" exists ...
	I1210 05:43:59.912039   96438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:43:59.912070   96438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-087661-m03
	I1210 05:43:59.929924   96438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/ha-087661-m03/id_rsa Username:docker}
	I1210 05:44:00.021708   96438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:44:00.033626   96438 kubeconfig.go:125] found "ha-087661" server: "https://192.168.49.254:8443"
	I1210 05:44:00.033651   96438 api_server.go:166] Checking apiserver status ...
	I1210 05:44:00.033695   96438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:44:00.043742   96438 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2225/cgroup
	W1210 05:44:00.051542   96438 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:44:00.051593   96438 ssh_runner.go:195] Run: ls
	I1210 05:44:00.054852   96438 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 05:44:00.058713   96438 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 05:44:00.058731   96438 status.go:463] ha-087661-m03 apiserver status = Running (err=<nil>)
	I1210 05:44:00.058738   96438 status.go:176] ha-087661-m03 status: &{Name:ha-087661-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:44:00.058750   96438 status.go:174] checking status of ha-087661-m04 ...
	I1210 05:44:00.058953   96438 cli_runner.go:164] Run: docker container inspect ha-087661-m04 --format={{.State.Status}}
	I1210 05:44:00.076229   96438 status.go:371] ha-087661-m04 host status = "Running" (err=<nil>)
	I1210 05:44:00.076245   96438 host.go:66] Checking if "ha-087661-m04" exists ...
	I1210 05:44:00.076461   96438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-087661-m04
	I1210 05:44:00.092838   96438 host.go:66] Checking if "ha-087661-m04" exists ...
	I1210 05:44:00.093058   96438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:44:00.093122   96438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-087661-m04
	I1210 05:44:00.109638   96438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/ha-087661-m04/id_rsa Username:docker}
	I1210 05:44:00.200770   96438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:44:00.212557   96438 status.go:176] ha-087661-m04 status: &{Name:ha-087661-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.73s)
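
Note: the status checks in the stderr block above locate the kube-apiserver process with pgrep, tolerate the missing freezer cgroup, and then probe https://192.168.49.254:8443/healthz directly. A minimal Go sketch of that final probe, assuming the same endpoint and skipping TLS verification because the apiserver certificate is cluster-signed (an assumption, not minikube's exact client code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Illustrative probe only; minikube's own status code path is more involved.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("/healthz returned", resp.StatusCode) // 200 corresponds to "Running" in the log above
}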

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 node start m02 --alsologtostderr -v 5: (7.991894751s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.88s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (125.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 stop --alsologtostderr -v 5
E1210 05:44:26.304368    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:54.536458    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:54.542863    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:54.554165    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:54.575544    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:54.616889    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:54.698317    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:54.859799    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:55.182020    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:55.823916    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:57.105491    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:59.667715    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:45:04.789419    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 stop --alsologtostderr -v 5: (55.051399124s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 start --wait true --alsologtostderr -v 5
E1210 05:45:15.031342    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:45:29.263271    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:45:35.513297    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 start --wait true --alsologtostderr -v 5: (1m9.871096879s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (125.05s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 node delete m03 --alsologtostderr -v 5
E1210 05:46:16.474591    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 node delete m03 --alsologtostderr -v 5: (9.670144936s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.46s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (49.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 stop --alsologtostderr -v 5
E1210 05:46:42.445520    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:47:10.146638    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 stop --alsologtostderr -v 5: (49.85125328s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5: exit status 7 (114.563346ms)

                                                
                                                
-- stdout --
	ha-087661
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-087661-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-087661-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:47:16.722598  111110 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:47:16.722830  111110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:16.722840  111110 out.go:374] Setting ErrFile to fd 2...
	I1210 05:47:16.722844  111110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:16.723054  111110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:47:16.723263  111110 out.go:368] Setting JSON to false
	I1210 05:47:16.723288  111110 mustload.go:66] Loading cluster: ha-087661
	I1210 05:47:16.723461  111110 notify.go:221] Checking for updates...
	I1210 05:47:16.723746  111110 config.go:182] Loaded profile config "ha-087661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:47:16.723763  111110 status.go:174] checking status of ha-087661 ...
	I1210 05:47:16.724193  111110 cli_runner.go:164] Run: docker container inspect ha-087661 --format={{.State.Status}}
	I1210 05:47:16.744612  111110 status.go:371] ha-087661 host status = "Stopped" (err=<nil>)
	I1210 05:47:16.744664  111110 status.go:384] host is not running, skipping remaining checks
	I1210 05:47:16.744677  111110 status.go:176] ha-087661 status: &{Name:ha-087661 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:47:16.744725  111110 status.go:174] checking status of ha-087661-m02 ...
	I1210 05:47:16.745073  111110 cli_runner.go:164] Run: docker container inspect ha-087661-m02 --format={{.State.Status}}
	I1210 05:47:16.763511  111110 status.go:371] ha-087661-m02 host status = "Stopped" (err=<nil>)
	I1210 05:47:16.763528  111110 status.go:384] host is not running, skipping remaining checks
	I1210 05:47:16.763533  111110 status.go:176] ha-087661-m02 status: &{Name:ha-087661-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:47:16.763548  111110 status.go:174] checking status of ha-087661-m04 ...
	I1210 05:47:16.763760  111110 cli_runner.go:164] Run: docker container inspect ha-087661-m04 --format={{.State.Status}}
	I1210 05:47:16.779340  111110 status.go:371] ha-087661-m04 host status = "Stopped" (err=<nil>)
	I1210 05:47:16.779358  111110 status.go:384] host is not running, skipping remaining checks
	I1210 05:47:16.779364  111110 status.go:176] ha-087661-m04 status: &{Name:ha-087661-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (49.97s)
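
Note: in the StopCluster output above, "minikube status" exits with code 7 once every host reports Stopped, and the test accepts that non-zero exit as the expected outcome. A short sketch of invoking the same command and reading that exit code (binary path as used in this report):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-087661", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit code 7 accompanied the all-Stopped status in this run.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube status:", err)
	}
}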

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 05:47:38.396615    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.666577534s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.43s)
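
Note: the post-restart readiness check above uses a kubectl go-template that prints the Ready condition status for every node. The same check driven from Go via os/exec, as a sketch rather than the test's own helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template idea as the log: emit each node's Ready condition status.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}} {{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("node Ready statuses:", strings.Fields(string(out))) // expect only "True" on a healthy cluster
}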

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-087661 node add --control-plane --alsologtostderr -v 5: (1m19.089897456s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-087661 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.93s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (43.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-018272 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1210 05:49:54.536470    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:22.240345    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-018272 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (43.899027689s)
--- PASS: TestJSONOutput/start/Command (43.90s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-018272 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-018272 --output=json --user=testUser: (7.949633724s)
--- PASS: TestJSONOutput/stop/Command (7.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-294986 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-294986 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.901639ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2a13f59e-b146-427f-984a-f1ba7fdbdfa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-294986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"690bced5-51c3-49d0-8e99-9932bf733a4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"1a68f5b7-27ac-436b-833e-569970547e13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f1649ce5-6082-412d-a0de-04554713e305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig"}}
	{"specversion":"1.0","id":"725bd855-ce42-44df-a261-ca95731a417b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube"}}
	{"specversion":"1.0","id":"e01fa48a-79cb-4a3f-929c-352543d3c636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fe397b00-ecd9-4bd9-ad0d-8bac3f43f4fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d42319eb-6ecb-43a9-a705-4b04be2dfe42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-294986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-294986
--- PASS: TestErrorJSONOutput (0.22s)
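
Note: the --output=json lines above are CloudEvents-style envelopes whose type field distinguishes step, info and error events. A sketch for decoding one of those lines, using only the fields visible in this report (the real schema may carry more):

package main

import (
	"encoding/json"
	"fmt"
)

// Fields taken from the JSON lines above; this is not the complete minikube event schema.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"d42319eb-6ecb-43a9-a705-4b04be2dfe42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}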

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.99s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-226484 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-226484 --network=: (27.853677538s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-226484" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-226484
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-226484: (2.113077125s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.99s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.92s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-301203 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-301203 --network=bridge: (23.899815443s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-301203" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-301203
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-301203: (2.003020971s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.92s)

                                                
                                    
TestKicExistingNetwork (25.76s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1210 05:51:38.470674    9253 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 05:51:38.487985    9253 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 05:51:38.488108    9253 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 05:51:38.488138    9253 cli_runner.go:164] Run: docker network inspect existing-network
W1210 05:51:38.503285    9253 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 05:51:38.503311    9253 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1210 05:51:38.503335    9253 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1210 05:51:38.503490    9253 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 05:51:38.519471    9253 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9ebf62c95cf7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:a8:ac:6e:16:1a} reservation:<nil>}
I1210 05:51:38.519860    9253 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e7100}
I1210 05:51:38.519895    9253 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 05:51:38.519956    9253 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 05:51:38.563706    9253 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-728807 --network=existing-network
E1210 05:51:42.447581    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:52.325166    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-728807 --network=existing-network: (23.655899s)
helpers_test.go:176: Cleaning up "existing-network-728807" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-728807
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-728807: (1.980069137s)
I1210 05:52:04.216065    9253 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.76s)
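
Note: the trace above first fails to inspect the not-yet-existing network, then picks the free private subnet 192.168.58.0/24 and creates it with the labels minikube later uses to find and clean it up. Re-issuing that creation step from Go, with the flags copied from the logged "docker network create" invocation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flag set taken from the "docker network create" line in the log above.
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Println("network create failed:", err, string(out))
		return
	}
	fmt.Println("created network id:", string(out))
}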

                                                
                                    
TestKicCustomSubnet (27.89s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-247646 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-247646 --subnet=192.168.60.0/24: (25.753865749s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-247646 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-247646" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-247646
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-247646: (2.116922743s)
--- PASS: TestKicCustomSubnet (27.89s)
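
Note: the subnet verification above reads back the first IPAM config entry with a docker network inspect format template. The same assertion from Go, assuming the test's network still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Format string matches the one used by kic_custom_network_test above.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-247646",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	subnet := strings.TrimSpace(string(out))
	if subnet != "192.168.60.0/24" {
		fmt.Println("unexpected subnet:", subnet)
		return
	}
	fmt.Println("subnet matches:", subnet)
}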

                                                
                                    
TestKicStaticIP (26.6s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-490784 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-490784 --static-ip=192.168.200.200: (24.347594749s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-490784 ip
helpers_test.go:176: Cleaning up "static-ip-490784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-490784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-490784: (2.101960642s)
--- PASS: TestKicStaticIP (26.60s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (62.42s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-182698 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-182698 --driver=docker  --container-runtime=crio: (29.951164931s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-185015 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-185015 --driver=docker  --container-runtime=crio: (25.725912641s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-182698
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-185015
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-185015" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-185015
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-185015: (3.20617473s)
helpers_test.go:176: Cleaning up "first-182698" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-182698
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-182698: (2.359821684s)
--- PASS: TestMinikubeProfile (62.42s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-666570 --memory=3072 --mount-string /tmp/TestMountStartserial4068227579/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-666570 --memory=3072 --mount-string /tmp/TestMountStartserial4068227579/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.534761697s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.54s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-666570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
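
Note: the two tests above start a Kubernetes-less profile with a 9p mount described by --mount-string host-dir:guest-dir and then confirm the guest path over ssh. A condensed start-and-verify sketch; the profile name and host directory here are made up, and only a subset of the logged flags is shown:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Hypothetical profile "mount-demo"; flags follow mount_start_test above.
	if out, err := run("out/minikube-linux-amd64", "start", "-p", "mount-demo",
		"--memory=3072", "--mount-string", "/tmp/mount-demo:/minikube-host",
		"--mount-port", "46464", "--no-kubernetes",
		"--driver=docker", "--container-runtime=crio"); err != nil {
		fmt.Println("start failed:", err, out)
		return
	}
	// Same verification the test performs with "ssh -- ls /minikube-host".
	out, err := run("out/minikube-linux-amd64", "-p", "mount-demo", "ssh", "--", "ls", "/minikube-host")
	if err != nil {
		fmt.Println("ssh ls failed:", err, out)
		return
	}
	fmt.Print(out)
}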

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-682357 --memory=3072 --mount-string /tmp/TestMountStartserial4068227579/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-682357 --memory=3072 --mount-string /tmp/TestMountStartserial4068227579/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.4824134s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.48s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682357 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-666570 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-666570 --alsologtostderr -v=5: (1.647752058s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682357 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-682357
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-682357: (1.259058314s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.05s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-682357
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-682357: (6.054548591s)
--- PASS: TestMountStart/serial/RestartStopped (7.05s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682357 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030786 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 05:54:54.535532    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:55:29.263213    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030786 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m38.908235328s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.37s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-030786 -- rollout status deployment/busybox: (1.792952883s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qkddk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qpsr6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qkddk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qpsr6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qkddk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qpsr6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.22s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qkddk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qkddk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qpsr6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030786 -- exec busybox-7b57f96db7-qpsr6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
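
Note: the ping check above resolves host.minikube.internal inside each busybox pod with an nslookup/awk/cut pipeline and then pings the returned address (the host-side gateway, 192.168.67.1 in this run). The same two steps chained from Go, reusing a pod name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-qkddk" // pod name taken from the log above
	// Extract the host.minikube.internal address exactly as the test's shell pipeline does.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	hostIP := strings.TrimSpace(string(out))
	// Single ICMP ping from the pod back to the host network.
	pingOut, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).CombinedOutput()
	if err != nil {
		fmt.Println("ping failed:", err)
	}
	fmt.Print(string(pingOut))
}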

                                                
                                    
TestMultiNode/serial/AddNode (54.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-030786 -v=5 --alsologtostderr
E1210 05:56:42.446058    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-030786 -v=5 --alsologtostderr: (53.802602982s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.42s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-030786 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp testdata/cp-test.txt multinode-030786:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2728751784/001/cp-test_multinode-030786.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786:/home/docker/cp-test.txt multinode-030786-m02:/home/docker/cp-test_multinode-030786_multinode-030786-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m02 "sudo cat /home/docker/cp-test_multinode-030786_multinode-030786-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786:/home/docker/cp-test.txt multinode-030786-m03:/home/docker/cp-test_multinode-030786_multinode-030786-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m03 "sudo cat /home/docker/cp-test_multinode-030786_multinode-030786-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp testdata/cp-test.txt multinode-030786-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2728751784/001/cp-test_multinode-030786-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786-m02:/home/docker/cp-test.txt multinode-030786:/home/docker/cp-test_multinode-030786-m02_multinode-030786.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786 "sudo cat /home/docker/cp-test_multinode-030786-m02_multinode-030786.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786-m02:/home/docker/cp-test.txt multinode-030786-m03:/home/docker/cp-test_multinode-030786-m02_multinode-030786-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m03 "sudo cat /home/docker/cp-test_multinode-030786-m02_multinode-030786-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp testdata/cp-test.txt multinode-030786-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2728751784/001/cp-test_multinode-030786-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786-m03:/home/docker/cp-test.txt multinode-030786:/home/docker/cp-test_multinode-030786-m03_multinode-030786.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786 "sudo cat /home/docker/cp-test_multinode-030786-m03_multinode-030786.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786-m03:/home/docker/cp-test.txt multinode-030786-m02:/home/docker/cp-test_multinode-030786-m03_multinode-030786-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m02 "sudo cat /home/docker/cp-test_multinode-030786-m03_multinode-030786-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.45s)
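
The copy matrix above is the same round trip repeated for every node pair: push a file in with "cp", then read it back over SSH. A minimal sketch using the profile and paths from this run (the cluster from the earlier steps is assumed to still be running):

  $ out/minikube-linux-amd64 -p multinode-030786 cp testdata/cp-test.txt multinode-030786:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786 "sudo cat /home/docker/cp-test.txt"            # verify on the source node
  $ out/minikube-linux-amd64 -p multinode-030786 cp multinode-030786:/home/docker/cp-test.txt multinode-030786-m02:/home/docker/cp-test_multinode-030786_multinode-030786-m02.txt
  $ out/minikube-linux-amd64 -p multinode-030786 ssh -n multinode-030786-m02 "sudo cat /home/docker/cp-test_multinode-030786_multinode-030786-m02.txt"   # verify on the target node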

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-030786 node stop m03: (1.259554411s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030786 status: exit status 7 (475.452788ms)

                                                
                                                
-- stdout --
	multinode-030786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-030786-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-030786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr: exit status 7 (480.630841ms)

                                                
                                                
-- stdout --
	multinode-030786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-030786-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-030786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:57:18.659585  179791 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:57:18.659718  179791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:18.659729  179791 out.go:374] Setting ErrFile to fd 2...
	I1210 05:57:18.659736  179791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:18.659961  179791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:57:18.660163  179791 out.go:368] Setting JSON to false
	I1210 05:57:18.660190  179791 mustload.go:66] Loading cluster: multinode-030786
	I1210 05:57:18.660302  179791 notify.go:221] Checking for updates...
	I1210 05:57:18.660554  179791 config.go:182] Loaded profile config "multinode-030786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:57:18.660569  179791 status.go:174] checking status of multinode-030786 ...
	I1210 05:57:18.661037  179791 cli_runner.go:164] Run: docker container inspect multinode-030786 --format={{.State.Status}}
	I1210 05:57:18.681706  179791 status.go:371] multinode-030786 host status = "Running" (err=<nil>)
	I1210 05:57:18.681731  179791 host.go:66] Checking if "multinode-030786" exists ...
	I1210 05:57:18.682006  179791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-030786
	I1210 05:57:18.698901  179791 host.go:66] Checking if "multinode-030786" exists ...
	I1210 05:57:18.699136  179791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:57:18.699185  179791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-030786
	I1210 05:57:18.716214  179791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/multinode-030786/id_rsa Username:docker}
	I1210 05:57:18.808912  179791 ssh_runner.go:195] Run: systemctl --version
	I1210 05:57:18.815030  179791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:57:18.826769  179791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:18.882444  179791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-10 05:57:18.872574201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:57:18.882988  179791 kubeconfig.go:125] found "multinode-030786" server: "https://192.168.67.2:8443"
	I1210 05:57:18.883017  179791 api_server.go:166] Checking apiserver status ...
	I1210 05:57:18.883047  179791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:18.894176  179791 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2294/cgroup
	W1210 05:57:18.902194  179791 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2294/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:57:18.902240  179791 ssh_runner.go:195] Run: ls
	I1210 05:57:18.905728  179791 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 05:57:18.909745  179791 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 05:57:18.909761  179791 status.go:463] multinode-030786 apiserver status = Running (err=<nil>)
	I1210 05:57:18.909769  179791 status.go:176] multinode-030786 status: &{Name:multinode-030786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:57:18.909783  179791 status.go:174] checking status of multinode-030786-m02 ...
	I1210 05:57:18.909986  179791 cli_runner.go:164] Run: docker container inspect multinode-030786-m02 --format={{.State.Status}}
	I1210 05:57:18.926272  179791 status.go:371] multinode-030786-m02 host status = "Running" (err=<nil>)
	I1210 05:57:18.926290  179791 host.go:66] Checking if "multinode-030786-m02" exists ...
	I1210 05:57:18.926554  179791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-030786-m02
	I1210 05:57:18.943180  179791 host.go:66] Checking if "multinode-030786-m02" exists ...
	I1210 05:57:18.943453  179791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:57:18.943494  179791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-030786-m02
	I1210 05:57:18.959938  179791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22094-5725/.minikube/machines/multinode-030786-m02/id_rsa Username:docker}
	I1210 05:57:19.051660  179791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:57:19.063208  179791 status.go:176] multinode-030786-m02 status: &{Name:multinode-030786-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:57:19.063244  179791 status.go:174] checking status of multinode-030786-m03 ...
	I1210 05:57:19.063583  179791 cli_runner.go:164] Run: docker container inspect multinode-030786-m03 --format={{.State.Status}}
	I1210 05:57:19.080668  179791 status.go:371] multinode-030786-m03 host status = "Stopped" (err=<nil>)
	I1210 05:57:19.080683  179791 status.go:384] host is not running, skipping remaining checks
	I1210 05:57:19.080688  179791 status.go:176] multinode-030786-m03 status: &{Name:multinode-030786-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
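
As the two status dumps above show, stopping a single node flips its host/kubelet lines to "Stopped" and makes "minikube status" exit with code 7, so a down node can be detected from the exit code alone. A short sketch with the names from this run:

  $ out/minikube-linux-amd64 -p multinode-030786 node stop m03
  $ out/minikube-linux-amd64 -p multinode-030786 status      # per-node report; exits 7 while m03 is stopped
  $ echo $?                                                  # 7 in this run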

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-030786 node start m03 -v=5 --alsologtostderr: (6.634368498s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030786
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-030786
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-030786: (29.493155488s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030786 --wait=true -v=5 --alsologtostderr
E1210 05:58:05.508687    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030786 --wait=true -v=5 --alsologtostderr: (44.269069893s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030786
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.88s)
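
The restart path exercised here is a full stop followed by a "--wait=true" start, after which the node list should come back unchanged; roughly:

  $ out/minikube-linux-amd64 node list -p multinode-030786                                 # record the current nodes
  $ out/minikube-linux-amd64 stop -p multinode-030786
  $ out/minikube-linux-amd64 start -p multinode-030786 --wait=true -v=5 --alsologtostderr
  $ out/minikube-linux-amd64 node list -p multinode-030786                                 # same node set expected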

                                                
                                    
TestMultiNode/serial/DeleteNode (5.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-030786 node delete m03: (4.474129145s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.04s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-030786 stop: (30.58588065s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030786 status: exit status 7 (92.686668ms)

                                                
                                                
-- stdout --
	multinode-030786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-030786-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr: exit status 7 (92.304165ms)

                                                
                                                
-- stdout --
	multinode-030786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-030786-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:59:16.042116  189779 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:59:16.042203  189779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:59:16.042215  189779 out.go:374] Setting ErrFile to fd 2...
	I1210 05:59:16.042221  189779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:59:16.042432  189779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 05:59:16.042626  189779 out.go:368] Setting JSON to false
	I1210 05:59:16.042659  189779 mustload.go:66] Loading cluster: multinode-030786
	I1210 05:59:16.042761  189779 notify.go:221] Checking for updates...
	I1210 05:59:16.043099  189779 config.go:182] Loaded profile config "multinode-030786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:59:16.043115  189779 status.go:174] checking status of multinode-030786 ...
	I1210 05:59:16.043808  189779 cli_runner.go:164] Run: docker container inspect multinode-030786 --format={{.State.Status}}
	I1210 05:59:16.061140  189779 status.go:371] multinode-030786 host status = "Stopped" (err=<nil>)
	I1210 05:59:16.061165  189779 status.go:384] host is not running, skipping remaining checks
	I1210 05:59:16.061172  189779 status.go:176] multinode-030786 status: &{Name:multinode-030786 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:59:16.061210  189779 status.go:174] checking status of multinode-030786-m02 ...
	I1210 05:59:16.061448  189779 cli_runner.go:164] Run: docker container inspect multinode-030786-m02 --format={{.State.Status}}
	I1210 05:59:16.078971  189779 status.go:371] multinode-030786-m02 host status = "Stopped" (err=<nil>)
	I1210 05:59:16.078985  189779 status.go:384] host is not running, skipping remaining checks
	I1210 05:59:16.078991  189779 status.go:176] multinode-030786-m02 status: &{Name:multinode-030786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.77s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030786 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 05:59:54.536559    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030786 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.774819788s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030786 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (28.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030786
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030786-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-030786-m02 --driver=docker  --container-runtime=crio: exit status 14 (71.601175ms)

                                                
                                                
-- stdout --
	* [multinode-030786-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-030786-m02' is duplicated with machine name 'multinode-030786-m02' in profile 'multinode-030786'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030786-m03 --driver=docker  --container-runtime=crio
E1210 06:00:29.265243    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030786-m03 --driver=docker  --container-runtime=crio: (26.248565358s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-030786
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-030786: exit status 80 (279.10331ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-030786 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-030786-m03 already exists in multinode-030786-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-030786-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-030786-m03: (2.268549026s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.93s)
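
Both failures above are name collisions: a new profile may not reuse a machine name that already belongs to another profile (exit 14, MK_USAGE), and "node add" refuses a node whose generated name is already taken by a standalone profile (exit 80). A condensed sketch of the sequence, using this run's names:

  $ out/minikube-linux-amd64 start -p multinode-030786-m02 --driver=docker --container-runtime=crio   # rejected: duplicates machine multinode-030786-m02
  $ out/minikube-linux-amd64 start -p multinode-030786-m03 --driver=docker --container-runtime=crio   # allowed as an independent profile
  $ out/minikube-linux-amd64 node add -p multinode-030786                                             # rejected: the m03 name is now in use
  $ out/minikube-linux-amd64 delete -p multinode-030786-m03                                           # cleanup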

                                                
                                    
TestPreload (99.78s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-269925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1210 06:01:17.603048    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-269925 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (44.160899686s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-269925 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-269925
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-269925: (7.949300832s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-269925 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1210 06:01:42.445553    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-269925 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (44.339624488s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-269925 image list
helpers_test.go:176: Cleaning up "test-preload-269925" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-269925
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-269925: (2.337605084s)
--- PASS: TestPreload (99.78s)
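
The point of the sequence above is that an image pulled into a cluster started with --preload=false survives a stop and a preload-enabled restart. A rough outline with this run's profile name:

  $ out/minikube-linux-amd64 start -p test-preload-269925 --memory=3072 --preload=false --wait=true --driver=docker --container-runtime=crio
  $ out/minikube-linux-amd64 -p test-preload-269925 image pull gcr.io/k8s-minikube/busybox
  $ out/minikube-linux-amd64 stop -p test-preload-269925
  $ out/minikube-linux-amd64 start -p test-preload-269925 --preload=true --wait=true --driver=docker --container-runtime=crio
  $ out/minikube-linux-amd64 -p test-preload-269925 image list          # busybox should still be listed after the preloaded restart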

                                                
                                    
TestScheduledStopUnix (104.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-824753 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-824753 --memory=3072 --driver=docker  --container-runtime=crio: (27.394300371s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824753 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:02:47.630824  209232 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:02:47.630919  209232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:02:47.630927  209232 out.go:374] Setting ErrFile to fd 2...
	I1210 06:02:47.630931  209232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:02:47.631140  209232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:02:47.631377  209232 out.go:368] Setting JSON to false
	I1210 06:02:47.631466  209232 mustload.go:66] Loading cluster: scheduled-stop-824753
	I1210 06:02:47.631764  209232 config.go:182] Loaded profile config "scheduled-stop-824753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:02:47.631826  209232 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/config.json ...
	I1210 06:02:47.631979  209232 mustload.go:66] Loading cluster: scheduled-stop-824753
	I1210 06:02:47.632070  209232 config.go:182] Loaded profile config "scheduled-stop-824753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-824753 -n scheduled-stop-824753
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824753 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:02:48.004333  209386 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:02:48.004578  209386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:02:48.004589  209386 out.go:374] Setting ErrFile to fd 2...
	I1210 06:02:48.004593  209386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:02:48.004778  209386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:02:48.005019  209386 out.go:368] Setting JSON to false
	I1210 06:02:48.005192  209386 daemonize_unix.go:73] killing process 209267 as it is an old scheduled stop
	I1210 06:02:48.005288  209386 mustload.go:66] Loading cluster: scheduled-stop-824753
	I1210 06:02:48.005613  209386 config.go:182] Loaded profile config "scheduled-stop-824753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:02:48.005702  209386 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/config.json ...
	I1210 06:02:48.005904  209386 mustload.go:66] Loading cluster: scheduled-stop-824753
	I1210 06:02:48.006037  209386 config.go:182] Loaded profile config "scheduled-stop-824753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 06:02:48.011515    9253 retry.go:31] will retry after 103.526µs: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.012685    9253 retry.go:31] will retry after 101.437µs: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.013833    9253 retry.go:31] will retry after 222.781µs: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.014915    9253 retry.go:31] will retry after 446.357µs: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.016041    9253 retry.go:31] will retry after 588.963µs: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.017131    9253 retry.go:31] will retry after 644.472µs: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.018270    9253 retry.go:31] will retry after 1.57283ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.020468    9253 retry.go:31] will retry after 2.341247ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.023678    9253 retry.go:31] will retry after 2.182487ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.026928    9253 retry.go:31] will retry after 4.585154ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.032128    9253 retry.go:31] will retry after 5.480492ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.038326    9253 retry.go:31] will retry after 4.668174ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.043555    9253 retry.go:31] will retry after 12.503718ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.056801    9253 retry.go:31] will retry after 12.534686ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.070018    9253 retry.go:31] will retry after 38.644214ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
I1210 06:02:48.109682    9253 retry.go:31] will retry after 27.573894ms: open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824753 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824753 -n scheduled-stop-824753
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-824753
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824753 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:03:13.859836  210109 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:03:13.860069  210109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:13.860087  210109 out.go:374] Setting ErrFile to fd 2...
	I1210 06:03:13.860091  210109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:13.860298  210109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:03:13.860515  210109 out.go:368] Setting JSON to false
	I1210 06:03:13.860583  210109 mustload.go:66] Loading cluster: scheduled-stop-824753
	I1210 06:03:13.860875  210109 config.go:182] Loaded profile config "scheduled-stop-824753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:03:13.860937  210109 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/scheduled-stop-824753/config.json ...
	I1210 06:03:13.861109  210109 mustload.go:66] Loading cluster: scheduled-stop-824753
	I1210 06:03:13.861202  210109 config.go:182] Loaded profile config "scheduled-stop-824753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-824753
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-824753: exit status 7 (79.216751ms)

                                                
                                                
-- stdout --
	scheduled-stop-824753
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824753 -n scheduled-stop-824753
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824753 -n scheduled-stop-824753: exit status 7 (76.24513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-824753" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-824753
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-824753: (5.532348664s)
--- PASS: TestScheduledStopUnix (104.38s)
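
The schedule/reschedule/cancel cycle above can be driven by hand. A minimal sketch with this run's profile: a new --schedule replaces any pending one (the log shows the old scheduled process being killed), --cancel-scheduled clears them all, and once a schedule actually fires, status reports the host as Stopped with exit code 7.

  $ out/minikube-linux-amd64 stop -p scheduled-stop-824753 --schedule 5m        # schedule a stop five minutes out
  $ out/minikube-linux-amd64 stop -p scheduled-stop-824753 --schedule 15s       # reschedule; the earlier schedule is discarded
  $ out/minikube-linux-amd64 stop -p scheduled-stop-824753 --cancel-scheduled   # cancel all pending scheduled stops
  $ out/minikube-linux-amd64 status -p scheduled-stop-824753                    # exits 7 once a scheduled stop has completed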

                                                
                                    
TestInsufficientStorage (7.88s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-009971 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-009971 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (5.678289759s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4471fe36-be4d-4b4d-a135-5f41162404e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-009971] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1e45b2b-dc1b-4f73-b0b8-5d4930e63f67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"0c445bce-0cf2-40ed-ae3e-ea9bee523f64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ecdfca21-8d03-4eec-9577-1a6c631aa681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig"}}
	{"specversion":"1.0","id":"e2339f0c-25a5-49b0-b33f-412b68782b8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube"}}
	{"specversion":"1.0","id":"f7fecfa5-91d9-4f04-8e73-feb4c043a06b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3c9e2a9e-5f44-4951-b4f2-58513e8ed976","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f607547-c440-4ae6-a451-e280a3d127c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3fb4f9cb-4b71-47d0-89c3-902b8e0545ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3f50d2ab-a727-443e-93f5-04ca0335389c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"15c6a73c-c050-4c81-bcf9-585dddee36cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b9367b86-fe89-4c7b-9cd9-3869a884b791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-009971\" primary control-plane node in \"insufficient-storage-009971\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fdac8ab-104f-4f34-94fc-9a4d400003c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9fffea73-e12b-4e7f-8f53-fc266a77e70b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a123f33d-ea16-493a-a632-a1ced4955b79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-009971 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-009971 --output=json --layout=cluster: exit status 7 (278.625114ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-009971","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-009971","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 06:04:10.506192  212572 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-009971" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-009971 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-009971 --output=json --layout=cluster: exit status 7 (274.810907ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-009971","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-009971","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 06:04:10.781372  212683 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-009971" does not appear in /home/jenkins/minikube-integration/22094-5725/kubeconfig
	E1210 06:04:10.791402  212683 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/insufficient-storage-009971/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-009971" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-009971
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-009971: (1.642363698s)
--- PASS: TestInsufficientStorage (7.88s)
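
The low-disk failure is simulated through the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON events above (set here as environment variables for illustration). With those in place, start aborts with exit code 26 (RSRC_DOCKER_STORAGE) and the clustered status output reports code 507; per the error message, --force skips the free-space check. Roughly:

  $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-amd64 start -p insufficient-storage-009971 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
  # exit code 26; add --force to proceed anyway
  $ out/minikube-linux-amd64 status -p insufficient-storage-009971 --output=json --layout=cluster   # StatusCode 507, "InsufficientStorage"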

                                                
                                    
TestRunningBinaryUpgrade (324.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.332907651 start -p running-upgrade-897548 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.332907651 start -p running-upgrade-897548 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.367052616s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-897548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-897548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m59.190000231s)
helpers_test.go:176: Cleaning up "running-upgrade-897548" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-897548
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-897548: (2.614470564s)
--- PASS: TestRunningBinaryUpgrade (324.95s)

                                                
                                    
TestKubernetesUpgrade (307.03s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.577655766s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-196025
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-196025: (2.367991603s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-196025 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-196025 status --format={{.Host}}: exit status 7 (91.424794ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.678802854s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-196025 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (76.088848ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-196025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-196025
	    minikube start -p kubernetes-upgrade-196025 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1960252 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-196025 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.75194259s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-196025" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-196025
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-196025: (2.396681016s)
--- PASS: TestKubernetesUpgrade (307.03s)
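
The upgrade flow above is stop-then-start with a newer --kubernetes-version; moving back down is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested way back is to delete and recreate the profile. Condensed:

  $ out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  $ out/minikube-linux-amd64 stop -p kubernetes-upgrade-196025
  $ out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=crio   # upgrade in place
  $ out/minikube-linux-amd64 start -p kubernetes-upgrade-196025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio        # refused, exit 106
  $ out/minikube-linux-amd64 delete -p kubernetes-upgrade-196025                                                                                           # start over for a genuine downgrade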

                                                
                                    
TestMissingContainerUpgrade (101.84s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.252596254 start -p missing-upgrade-199762 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.252596254 start -p missing-upgrade-199762 --memory=3072 --driver=docker  --container-runtime=crio: (49.158226057s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-199762
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-199762: (10.413237194s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-199762
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-199762 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-199762 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.06144241s)
helpers_test.go:176: Cleaning up "missing-upgrade-199762" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-199762
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-199762: (2.413322478s)
--- PASS: TestMissingContainerUpgrade (101.84s)
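
The "missing container" scenario is produced by removing the docker container out from under a profile created by an older minikube release, then letting the current binary recreate it on start (the /tmp/minikube-v1.35.0.* path is the older binary the test uses). In outline:

  $ /tmp/minikube-v1.35.0.252596254 start -p missing-upgrade-199762 --memory=3072 --driver=docker --container-runtime=crio   # old release creates the cluster
  $ docker stop missing-upgrade-199762 && docker rm missing-upgrade-199762                                                   # remove the container behind minikube's back
  $ out/minikube-linux-amd64 start -p missing-upgrade-199762 --memory=3072 --driver=docker --container-runtime=crio          # current binary rebuilds it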

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (92.874286ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-235838] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
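
As the stderr above shows, --no-kubernetes and --kubernetes-version are mutually exclusive (exit 14, MK_USAGE); if a version is pinned in the global config, the message suggests unsetting it first. Sketch:

  $ out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # rejected
  $ minikube config unset kubernetes-version                                                                                                      # clear a globally pinned version (as suggested)
  $ out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio                  # runtime-only start, no Kubernetes components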

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235838 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235838 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.721296321s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-235838 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1210 06:04:54.535813    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.434563536s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-235838 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-235838 status -o json: exit status 2 (327.311237ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-235838","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-235838
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-235838: (2.079250678s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.84s)
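The status JSON above is what the test asserts on: the node container stays up ("Host":"Running") while the control plane is stopped ("Kubelet":"Stopped", "APIServer":"Stopped"), and in that state minikube status deliberately exits non-zero (here: 2). A small sketch for inspecting the same fields by hand, assuming jq is available on the host (jq is not part of the test itself):

  # print only the fields the test asserts on
  out/minikube-linux-amd64 -p NoKubernetes-235838 status -o json | jq '{Host, Kubelet, APIServer}'
  # note: run directly, status exits non-zero (2 in the log above) while components are stopped,
  # so scripts that treat any non-zero exit as fatal need to account for that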

                                                
                                    
x
+
TestNoKubernetes/serial/Start (3.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (3.76772001s)
--- PASS: TestNoKubernetes/serial/Start (3.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
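This subtest inspects the cache directory printed above to confirm that a --no-kubernetes start did not download Kubernetes binaries; v0.0.0 is the placeholder version used when Kubernetes is disabled. A quick manual equivalent, using the path from the log (the echo message is illustrative, not part of the test):

  ls -la /home/jenkins/minikube-integration/22094-5725/.minikube/cache/linux/amd64/v0.0.0 2>/dev/null \
    || echo "nothing cached under v0.0.0 - expected for a --no-kubernetes profile"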

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-235838 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-235838 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.403328ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
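The non-zero exit here is the success condition: systemctl is-active returns 0 only for an active unit, and status 3 (inactive) is exactly what a profile started with --no-kubernetes should report for the kubelet. The same probe can be run by hand with an explicit reading of the exit code:

  out/minikube-linux-amd64 ssh -p NoKubernetes-235838 "sudo systemctl is-active --quiet service kubelet" \
    && echo "kubelet is running (unexpected here)" \
    || echo "kubelet is not running (expected for --no-kubernetes)"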

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (16.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.503641111s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-235838
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-235838: (2.182463779s)
--- PASS: TestNoKubernetes/serial/Stop (2.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235838 --driver=docker  --container-runtime=crio
E1210 06:05:29.262111    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235838 --driver=docker  --container-runtime=crio: (6.260254518s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-235838 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-235838 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.562994ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (284.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.897606802 start -p stopped-upgrade-616121 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.897606802 start -p stopped-upgrade-616121 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.380928351s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.897606802 -p stopped-upgrade-616121 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.897606802 -p stopped-upgrade-616121 stop: (2.314460867s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-616121 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-616121 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.488288744s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (284.18s)
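In contrast to the missing-container case above, this test keeps the node container around: the profile is created and then cleanly stopped with the old release, and the binary under test must bring it back up in place. The sequence, copied from the log (the /tmp name is the temporary copy of the old release the test fetches):

  # create and stop a profile using the previous minikube release
  /tmp/minikube-v1.35.0.897606802 start -p stopped-upgrade-616121 --memory=3072 --vm-driver=docker --container-runtime=crio
  /tmp/minikube-v1.35.0.897606802 -p stopped-upgrade-616121 stop
  # the binary under test takes over the stopped profile
  out/minikube-linux-amd64 start -p stopped-upgrade-616121 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio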

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-094798 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-094798 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (149.948459ms)

                                                
                                                
-- stdout --
	* [false-094798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:06:27.774536  251345 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:06:27.774768  251345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:06:27.774776  251345 out.go:374] Setting ErrFile to fd 2...
	I1210 06:06:27.774780  251345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:06:27.774969  251345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-5725/.minikube/bin
	I1210 06:06:27.775426  251345 out.go:368] Setting JSON to false
	I1210 06:06:27.776518  251345 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2932,"bootTime":1765343856,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:06:27.776563  251345 start.go:143] virtualization: kvm guest
	I1210 06:06:27.778199  251345 out.go:179] * [false-094798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:06:27.779195  251345 notify.go:221] Checking for updates...
	I1210 06:06:27.779207  251345 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:06:27.780218  251345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:06:27.781305  251345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-5725/kubeconfig
	I1210 06:06:27.782373  251345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-5725/.minikube
	I1210 06:06:27.783279  251345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:06:27.784240  251345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:06:27.785799  251345 config.go:182] Loaded profile config "kubernetes-upgrade-196025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:06:27.785946  251345 config.go:182] Loaded profile config "running-upgrade-897548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 06:06:27.786053  251345 config.go:182] Loaded profile config "stopped-upgrade-616121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 06:06:27.786183  251345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:06:27.809947  251345 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:06:27.810037  251345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:06:27.863000  251345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:06:27.853705364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:06:27.863152  251345 docker.go:319] overlay module found
	I1210 06:06:27.864740  251345 out.go:179] * Using the docker driver based on user configuration
	I1210 06:06:27.865789  251345 start.go:309] selected driver: docker
	I1210 06:06:27.865802  251345 start.go:927] validating driver "docker" against <nil>
	I1210 06:06:27.865824  251345 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:06:27.867590  251345 out.go:203] 
	W1210 06:06:27.868665  251345 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 06:06:27.869603  251345 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-094798 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-094798" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:04:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-196025
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:06:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-897548
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:06:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-616121
contexts:
- context:
    cluster: kubernetes-upgrade-196025
    user: kubernetes-upgrade-196025
  name: kubernetes-upgrade-196025
- context:
    cluster: running-upgrade-897548
    user: running-upgrade-897548
  name: running-upgrade-897548
- context:
    cluster: stopped-upgrade-616121
    user: stopped-upgrade-616121
  name: stopped-upgrade-616121
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-196025
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kubernetes-upgrade-196025/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kubernetes-upgrade-196025/client.key
- name: running-upgrade-897548
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/running-upgrade-897548/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/running-upgrade-897548/client.key
- name: stopped-upgrade-616121
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/stopped-upgrade-616121/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/stopped-upgrade-616121/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-094798

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-094798"

                                                
                                                
----------------------- debugLogs end: false-094798 [took: 2.991081742s] --------------------------------
helpers_test.go:176: Cleaning up "false-094798" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-094798
--- PASS: TestNetworkPlugins/group/false (3.32s)
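The exit status 14 above is the point of this test: with the crio runtime, --cni=false is rejected up front ("The \"crio\" container runtime requires CNI") instead of producing a broken cluster, which is also why the debug log dump that follows finds no profile to inspect. A minimal sketch of the rejected call next to an accepted one (the bridge variant mirrors the --cni=bridge run later in this group):

  # rejected: crio needs a CNI plugin, so an explicit --cni=false fails validation
  out/minikube-linux-amd64 start -p false-094798 --memory=3072 --cni=false --driver=docker --container-runtime=crio
  # accepted: pick any CNI, or omit --cni and let minikube choose a default
  out/minikube-linux-amd64 start -p bridge-094798 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio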

                                                
                                    
x
+
TestPause/serial/Start (45.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-257171 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1210 06:09:54.536467    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-589967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-257171 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.19672719s)
--- PASS: TestPause/serial/Start (45.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-616121
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-616121: (1.010359887s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (54.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1210 06:10:29.263035    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.657897451s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.66s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (8.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-257171 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-257171 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.725324045s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.74s)
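As the test name says, a second start against an already-running profile must not reconfigure it; in this run the re-start finished in under nine seconds versus roughly 45 seconds for the initial start above. Both invocations are taken from the log:

  # initial start: no addons, wait for all components
  out/minikube-linux-amd64 start -p pause-257171 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
  # second start against the same running profile: expected to be a fast no-op
  out/minikube-linux-amd64 start -p pause-257171 --alsologtostderr -v=1 --driver=docker --container-runtime=crio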

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (53.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (53.048982568s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (60.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.958747009s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (64.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.510549446s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.51s)
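As this run shows, --cni accepts a path to a CNI manifest as well as the built-in plugin names, so a custom Flannel deployment can be applied at start time. The invocation from the log, where the manifest path is the repository's test fixture:

  out/minikube-linux-amd64 start -p custom-flannel-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
    --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio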

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-094798 "pgrep -a kubelet"
I1210 06:11:16.443548    9253 config.go:182] Loaded profile config "auto-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-094798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mdkzx" [e1bf6844-e705-4ddb-9901-0363fe6ab9d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mdkzx" [e1bf6844-e705-4ddb-9901-0363fe6ab9d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004645397s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-094798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
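The DNS, Localhost, and HairPin subtests above are the standard connectivity probes run against the netcat deployment created from testdata/netcat-deployment.yaml: service DNS resolution, a localhost port check, and a hairpin check where the pod connects back to itself through its own service. The probes, as issued in the log:

  # DNS resolution of the kubernetes service from inside the pod
  kubectl --context auto-094798 exec deployment/netcat -- nslookup kubernetes.default
  # localhost reachability inside the pod
  kubectl --context auto-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod reaches its own service name
  kubectl --context auto-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"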

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-qjbt8" [1498e662-79da-4ced-9cef-99374499af70] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007293024s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
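Before running the connectivity probes, the CNI-backed variants first wait for the plugin's own daemon pods; for kindnet that is the app=kindnet pods in kube-system, as shown above. A hand-run equivalent using kubectl wait is sketched below (kubectl wait is standard kubectl, but this exact command is not part of the test):

  kubectl --context kindnet-094798 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m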

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-094798 "pgrep -a kubelet"
I1210 06:11:39.798857    9253 config.go:182] Loaded profile config "kindnet-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-094798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gd58s" [404c3c0f-54dc-473e-a949-67b76691e1a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 06:11:42.445313    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-gd58s" [404c3c0f-54dc-473e-a949-67b76691e1a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00413756s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-8h68w" [e0b0ebc7-594d-4a33-bcd8-00cbd7b05b51] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-8h68w" [e0b0ebc7-594d-4a33-bcd8-00cbd7b05b51] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003971161s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (71.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.490696996s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-094798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-094798 "pgrep -a kubelet"
I1210 06:11:50.289622    9253 config.go:182] Loaded profile config "calico-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-094798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-967pq" [2c659f20-e927-4a30-ba02-5e13d0c0e8ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-967pq" [2c659f20-e927-4a30-ba02-5e13d0c0e8ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003590886s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-094798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-094798 "pgrep -a kubelet"
I1210 06:12:00.989145    9253 config.go:182] Loaded profile config "custom-flannel-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-094798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-547c2" [48983c54-2d07-429a-b1aa-1e92a3c9eb50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-547c2" [48983c54-2d07-429a-b1aa-1e92a3c9eb50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004643647s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-094798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (55.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.493735716s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.49s)
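
Note: once a start like this completes, the flannel DaemonSet is what the later ControllerPod check waits on (pods labelled app=flannel in the kube-flannel namespace). A rough manual equivalent, assuming the profile's kubectl context is flannel-094798 and the DaemonSet keeps its default kube-flannel-ds name:

    # Confirm the flannel DaemonSet is rolled out before running connectivity tests.
    kubectl --context flannel-094798 -n kube-flannel get pods -l app=flannel -o wide
    kubectl --context flannel-094798 -n kube-flannel rollout status ds/kube-flannel-ds --timeout=120s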

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (78.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-094798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.45972033s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-725426 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-725426 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.557080287s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-094798 "pgrep -a kubelet"
I1210 06:12:58.640791    9253 config.go:182] Loaded profile config "enable-default-cni-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-094798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-m599s" [a37ce418-56a5-4511-91b7-b28d54ff3a9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-m599s" [a37ce418-56a5-4511-91b7-b28d54ff3a9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004330449s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-28rct" [3a1a7267-9753-4a57-b2f1-e647d6618081] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003518628s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-094798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-094798 "pgrep -a kubelet"
I1210 06:13:12.194973    9253 config.go:182] Loaded profile config "flannel-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (7.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-094798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-g62k7" [e0b6c155-1908-4a60-a765-67807a216d66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-g62k7" [e0b6c155-1908-4a60-a765-67807a216d66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 7.004047249s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (7.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-094798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-725426 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [afc89bbc-2505-4919-a0eb-647322d563cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [afc89bbc-2505-4919-a0eb-647322d563cc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003304127s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-725426 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)
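
Note: DeployApp creates a busybox pod from testdata/busybox.yaml, waits up to 8m for it to run, and then reads the container's open-file limit. The wait-then-probe part can be replayed by hand once the pod exists (context name taken from the log above):

    # Wait for the busybox pod, then check its file-descriptor limit.
    kubectl --context old-k8s-version-725426 wait pod/busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-725426 exec busybox -- /bin/sh -c "ulimit -n"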

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (46.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (46.406355618s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-725426 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-725426 --alsologtostderr -v=3: (16.447237535s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-094798 "pgrep -a kubelet"
I1210 06:13:40.067865    9253 config.go:182] Loaded profile config "bridge-094798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-094798 replace --force -f testdata/netcat-deployment.yaml
I1210 06:13:40.562162    9253 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1210 06:13:40.564746    9253 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ls59g" [4f58496a-8fd6-4f8f-b9f0-f7984f1437ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ls59g" [4f58496a-8fd6-4f8f-b9f0-f7984f1437ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004313679s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.70s)
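
Note: the kapi.go lines above show the harness polling the netcat Deployment until its observed generation and replica counts catch up with the spec. A sketch of the same wait done manually with kubectl (the jsonpath fields are the standard Deployment status fields):

    # Block until the netcat Deployment stabilizes, then print the fields the harness polls.
    kubectl --context bridge-094798 rollout status deployment/netcat --timeout=15m
    kubectl --context bridge-094798 get deployment netcat -o jsonpath='{.status.observedGeneration}{"\t"}{.status.readyReplicas}{"\n"}'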

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (50.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (50.524664221s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426: exit status 7 (90.550769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-725426 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
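
Note: EnableAddonAfterStop first confirms the host is down (the status call above prints "Stopped" and exits 7, which the test tolerates) and then enables the dashboard addon against the stopped profile. A hedged replay of that sequence:

    # A stopped profile prints "Stopped"; the non-zero exit is expected here.
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-725426 || true
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-725426 --images=MetricsScraper=registry.k8s.io/echoserver:1.4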

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-725426 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-725426 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.199655421s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-725426 -n old-k8s-version-725426
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-094798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-094798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (53.245774498s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-468539 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bae410c4-fff9-404e-a09d-794d0f6bd59d] Pending
helpers_test.go:353: "busybox" [bae410c4-fff9-404e-a09d-794d0f6bd59d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bae410c4-fff9-404e-a09d-794d0f6bd59d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003751098s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-468539 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-468539 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-468539 --alsologtostderr -v=3: (18.394079934s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (6.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-028500 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e898aa0d-3dca-4ee0-8728-aca196c5331d] Pending
helpers_test.go:353: "busybox" [e898aa0d-3dca-4ee0-8728-aca196c5331d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 6.003752444s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-028500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (6.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8jvqp" [d6be06f9-987c-423e-8476-bd6ee21c0520] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005152594s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-028500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-028500 --alsologtostderr -v=3: (16.483181354s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539: exit status 7 (82.960096ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-468539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (48.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-468539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (48.600514552s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-468539 -n no-preload-468539
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8jvqp" [d6be06f9-987c-423e-8476-bd6ee21c0520] Running
E1210 06:14:45.510436    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/functional-604071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00335478s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-725426 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-725426 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)
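
Note: VerifyKubernetesImages lists every image in the profile and flags the ones outside the expected Kubernetes set (here the kindnetd and busybox test images). The same listing can be inspected directly; jq is only used for pretty-printing and is an assumption of this sketch:

    # Dump the profile's image list, as JSON and as the plain table.
    out/minikube-linux-amd64 -p old-k8s-version-725426 image list --format=json | jq .
    out/minikube-linux-amd64 -p old-k8s-version-725426 image list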

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (23.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (23.881260951s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (23.88s)
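
Note: this start forwards the pod network CIDR to kubeadm (--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16) and waits only for the apiserver, system pods, and default service account, since no CNI is installed yet. One hedged way to confirm the CIDR took effect, assuming the kubectl context matches the profile name:

    # Each node's podCIDR should fall inside 10.42.0.0/16.
    kubectl --context newest-cni-218688 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'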

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500: exit status 7 (99.976621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-028500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (50.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-028500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (50.434453155s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-028500 -n embed-certs-028500
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-125336 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d26c68d7-e6ef-4b9c-9cb5-08387e67e53f] Pending
helpers_test.go:353: "busybox" [d26c68d7-e6ef-4b9c-9cb5-08387e67e53f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d26c68d7-e6ef-4b9c-9cb5-08387e67e53f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004113042s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-125336 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-125336 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-125336 --alsologtostderr -v=3: (18.234887988s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-218688 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-218688 --alsologtostderr -v=3: (2.498432707s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688: exit status 7 (77.72672ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-218688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1210 06:15:29.262933    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/addons-193927/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-218688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (10.639128019s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218688 -n newest-cni-218688
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lbt26" [56f38943-b019-4077-ba5a-f28141b21c74] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003541521s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336: exit status 7 (94.746242ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-125336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-125336 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (49.255332066s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125336 -n default-k8s-diff-port-125336
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.56s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-218688 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I1210 06:15:36.292592    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.72s)
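
Note: the binary.go lines above show the harness streaming kubeadm from dl.k8s.io and validating it against the published .sha256 instead of caching it locally. A hedged manual equivalent of that download-and-verify step:

    # Fetch kubeadm for this Kubernetes version and verify it against its published checksum.
    curl -fLO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm
    curl -fL -o kubeadm.sha256 https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check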

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lbt26" [56f38943-b019-4077-ba5a-f28141b21c74] Running
I1210 06:15:36.487723    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
I1210 06:15:36.614493    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003947766s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-468539 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-468539 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1210 06:15:41.631889    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
I1210 06:15:41.782675    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
I1210 06:15:41.951200    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-vrlx4" [d0ef6401-bda6-4954-8ac1-662c0b3daa63] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003243219s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-vrlx4" [d0ef6401-bda6-4954-8ac1-662c0b3daa63] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002510653s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-028500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-028500 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1210 06:15:59.249879    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 06:15:59.387020    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 06:15:59.520619    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.63s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ccjtq" [df90f057-bca7-448f-9c97-e9439334019b] Running
E1210 06:16:26.915639    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/auto-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003055229s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ccjtq" [df90f057-bca7-448f-9c97-e9439334019b] Running
E1210 06:16:33.464627    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:33.471007    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:33.482300    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:33.503591    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:33.544914    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:33.626672    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:33.788183    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:34.109924    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003125085s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-125336 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-125336 image list --format=json
E1210 06:16:34.751627    9253 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kindnet-094798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I1210 06:16:34.897639    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 06:16:35.053703    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 06:16:35.207668    9253 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.67s)

                                                
                                    

Test skip (33/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
13 TestDownloadOnly/v1.34.3/preload-exists 0.13
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
376 TestNetworkPlugins/group/kubenet 3.31
384 TestNetworkPlugins/group/cilium 3.5
390 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/preload-exists (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1210 05:28:38.919656    9253 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
W1210 05:28:38.965238    9253 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
W1210 05:28:39.049251    9253 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.34.3/preload-exists (0.13s)
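The preload check above probes two mirrors and skips the test when both return 404. A minimal sketch of that probe, assuming a HEAD request is enough to reproduce the status codes logged by preload.go:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The two URLs checked in the log above.
	urls := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4",
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4",
	}
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			fmt.Printf("error probing %s: %v\n", u, err)
			continue
		}
		resp.Body.Close()
		// A 404 from both mirrors is what produces "No preload image" above.
		fmt.Printf("%d %s\n", resp.StatusCode, u)
	}
}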

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-094798 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-094798" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:04:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-196025
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:06:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-897548
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:06:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-616121
contexts:
- context:
    cluster: kubernetes-upgrade-196025
    user: kubernetes-upgrade-196025
  name: kubernetes-upgrade-196025
- context:
    cluster: running-upgrade-897548
    user: running-upgrade-897548
  name: running-upgrade-897548
- context:
    cluster: stopped-upgrade-616121
    user: stopped-upgrade-616121
  name: stopped-upgrade-616121
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-196025
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kubernetes-upgrade-196025/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kubernetes-upgrade-196025/client.key
- name: running-upgrade-897548
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/running-upgrade-897548/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/running-upgrade-897548/client.key
- name: stopped-upgrade-616121
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/stopped-upgrade-616121/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/stopped-upgrade-616121/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-094798

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-094798"

                                                
                                                
----------------------- debugLogs end: kubenet-094798 [took: 3.153225759s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-094798" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-094798
--- SKIP: TestNetworkPlugins/group/kubenet (3.31s)
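Every probe in the debugLogs above fails with "context was not found" or "Profile not found" because the kubenet-094798 cluster was never started; the kubeconfig dumped mid-way through only contains the three *-upgrade contexts. A small sketch that reproduces the failing check, assuming the default kubeconfig path and client-go's clientcmd loader (not the helper the test harness actually uses):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; in the job above this resolves to the Jenkins workspace kubeconfig.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["kubenet-094798"]; !ok {
		// This is the condition behind every "context was not found" line above.
		fmt.Println(`context "kubenet-094798" does not exist`)
	}
}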

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-094798 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-094798

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-094798" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:04:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-196025
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:06:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-897548
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-5725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:06:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-616121
contexts:
- context:
    cluster: kubernetes-upgrade-196025
    user: kubernetes-upgrade-196025
  name: kubernetes-upgrade-196025
- context:
    cluster: running-upgrade-897548
    user: running-upgrade-897548
  name: running-upgrade-897548
- context:
    cluster: stopped-upgrade-616121
    user: stopped-upgrade-616121
  name: stopped-upgrade-616121
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-196025
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kubernetes-upgrade-196025/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/kubernetes-upgrade-196025/client.key
- name: running-upgrade-897548
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/running-upgrade-897548/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/running-upgrade-897548/client.key
- name: stopped-upgrade-616121
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/stopped-upgrade-616121/client.crt
    client-key: /home/jenkins/minikube-integration/22094-5725/.minikube/profiles/stopped-upgrade-616121/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-094798
>>> host: docker daemon status:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: docker daemon config:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: docker system info:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: cri-docker daemon status:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: cri-docker daemon config:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: cri-dockerd version:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: containerd daemon status:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: containerd daemon config:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: containerd config dump:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: crio daemon status:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: crio daemon config:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: /etc/crio:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

>>> host: crio config:
* Profile "cilium-094798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094798"

----------------------- debugLogs end: cilium-094798 [took: 3.335049327s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-094798" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-094798
--- SKIP: TestNetworkPlugins/group/cilium (3.50s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-569732" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-569732
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)